A very common technique when looking at general stochastic processes is to break them down into separate martingale and drift terms. This is easiest to describe in the discrete time situation. So, suppose that {\{X_n\}_{n=0,1,\ldots}} is a stochastic process adapted to the discrete-time filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n=0,1,\ldots},{\mathbb P})}. If X is integrable, then it is possible to decompose it into the sum of a martingale M and a process A, starting from zero, and such that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n\ge1}. That is, A is a predictable process. The martingale condition on M enforces the identity

\displaystyle  A_n-A_{n-1}={\mathbb E}[A_n-A_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}].

So, A is uniquely defined by

\displaystyle  A_n=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\vert\mathcal{F}_{k-1}\right], (1)

and is referred to as the compensator of X. This is just the predictable term in the Doob decomposition described at the start of the previous post.
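Equation (1) can be made concrete with a simple simulation. The sketch below (my own illustrative example, not from the text; all parameters are arbitrary) takes a biased simple random walk, whose iid increments make the conditional expectations in (1) deterministic, computes the compensator, and checks the martingale property of {X-A} empirically.

```python
import numpy as np

# Compensator (1) for a biased random walk: X_n is a sum of iid steps
# equal to +1 with probability p and -1 otherwise.  Since the steps are
# iid, E[X_n - X_{n-1} | F_{n-1}] = 2p - 1, so the compensator is the
# deterministic drift A_n = n(2p - 1), and M = X - A is a martingale.
rng = np.random.default_rng(0)
p, n_steps, n_paths = 0.7, 50, 100_000

steps = np.where(rng.random((n_paths, n_steps)) < p, 1, -1)
X = np.cumsum(steps, axis=1)                 # paths of the walk
A = (2 * p - 1) * np.arange(1, n_steps + 1)  # compensator from (1)
M = X - A                                    # candidate martingale

# Martingale check: E[M_n] = E[M_0] = 0 for every n, up to Monte Carlo error.
max_bias = np.abs(M.mean(axis=0)).max()
assert max_bias < 0.05
```

Because the increments are iid, A here is deterministic; for a general adapted process the conditional expectations in (1) are genuinely random but still {\mathcal{F}_{n-1}}-measurable.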

In continuous time, where we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, the situation is much more complicated. There is no simple explicit formula such as (1) for the compensator of a process. Instead, it is defined as follows.

Definition 1 The compensator of a cadlag adapted process X is a predictable FV process A, with {A_0=0}, such that {X-A} is a local martingale.

For an arbitrary process, there is no guarantee that a compensator exists. From the previous post, however, we know exactly when it does. The processes for which a compensator exists are precisely the special semimartingales or, equivalently, the locally integrable semimartingales. Furthermore, if it exists, then the compensator is uniquely defined up to evanescence. Definition 1 is considerably different from equation (1) describing the discrete-time case. However, we will show that, at least for processes with integrable variation, the continuous-time definition does follow from the limit of discrete time compensators calculated along ever finer partitions (see below).

Although we know that compensators exist for all locally integrable semimartingales, the notion is often defined and used specifically for the case of adapted processes with locally integrable variation or, even, just integrable increasing processes. As with all FV processes, these are semimartingales, with stochastic integration for locally bounded integrands coinciding with Lebesgue-Stieltjes integration along the sample paths. As an example, consider a homogeneous Poisson process X with rate {\lambda}. The compensated Poisson process {M_t=X_t-\lambda t} is a martingale. So, X has compensator {\lambda t}.
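The Poisson example can be checked numerically. The following sketch (parameters are illustrative) simulates Poisson paths via independent increments on a grid and verifies that the compensated process {M_t=X_t-\lambda t} has zero mean at every grid time.

```python
import numpy as np

# Empirical check that the compensated Poisson process M_t = X_t - lam*t
# has zero expectation on a time grid.  A sketch, not a proof.
rng = np.random.default_rng(1)
lam, T, n_grid, n_paths = 2.0, 5.0, 100, 100_000
dt = T / n_grid

# Independent Poisson increments over each grid interval give paths of X.
X = np.cumsum(rng.poisson(lam * dt, size=(n_paths, n_grid)), axis=1)
t = dt * np.arange(1, n_grid + 1)
M = X - lam * t          # subtract the compensator lam*t

max_bias = np.abs(M.mean(axis=0)).max()
assert max_bias < 0.05
```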

We start by describing the jumps of the compensator, which can be done simply in terms of the jumps of the original process. Recall that the set of jump times {\{t\colon\Delta X_t\not=0\}} of a cadlag process is contained in the union of the graphs of a sequence of stopping times, each of which is either predictable or totally inaccessible. We, therefore, only need to calculate {\Delta A_\tau} separately for the cases where {\tau} is a predictable stopping time and where it is totally inaccessible.

For the remainder of this post, it is assumed that the underlying filtered probability space is complete. Whenever we refer to the compensator of a process X, it will be understood that X is a special semimartingale. Also, the jump {\Delta X_t} of a process is defined to be zero at time {t=\infty}.

Lemma 2 Let A be the compensator of a process X. Then, for a stopping time {\tau},

  1. {\Delta A_\tau=0} if {\tau} is totally inaccessible.
  2. {\Delta A_\tau={\mathbb E}\left[\Delta X_\tau\vert\mathcal{F}_{\tau-}\right]} if {\tau} is predictable.

Proof: As A is a cadlag predictable process, we have {\Delta A_\tau=0} if {\tau} is totally inaccessible. Only the second statement remains to be proven.

Suppose that {\tau} is predictable. By definition of the compensator, the process {M=X-A} is a local martingale. Then, as {A_\tau} is {\mathcal{F}_{\tau-}}-measurable,

\displaystyle  \Delta A_\tau=\mathbb{E}[\Delta A_\tau\vert\mathcal{F}_{\tau-}]=\mathbb{E}[\Delta X_\tau\vert\mathcal{F}_{\tau-}] -\mathbb{E}[\Delta M_\tau\vert\mathcal{F}_{\tau-}].

It just needs to be shown that {\mathbb{E}[\Delta M_\tau\vert\mathcal{F}_{\tau-}]} is almost surely zero. By localising, we can suppose without loss of generality that {\Delta M} is integrable. From the classification of predictable stopping times, we know that {N\equiv\Delta M_\tau1_{[\tau,\infty)}} is a local martingale. Integrating any bounded predictable process {\xi} with respect to N gives a local martingale {\tilde N=\xi_\tau\Delta M_\tau1_{[\tau,\infty)}} which, as it is dominated in {L^1}, is a proper martingale. By optional sampling,

\displaystyle  {\mathbb E}\left[\xi_\tau\Delta M_\tau\right]={\mathbb E}\left[\tilde N_\tau\right]=0.

As every bounded {\mathcal{F}_{\tau-}}-measurable random variable can be written in the form {\xi_\tau} for a predictable process {\xi}, this gives {{\mathbb E}[\Delta M_\tau\vert\mathcal{F}_{\tau-}]=0} as required. \Box

In particular, this result shows that the compensator of any continuous special semimartingale is itself continuous. We can go further than this, though, and show that all quasi-left-continuous processes have continuous compensators. Recall that a cadlag process X is quasi-left-continuous if {X_{\tau-}=X_\tau} (almost-surely) for all predictable stopping times {\tau}. This covers many kinds of processes which are commonly studied, such as Feller processes which, in particular, includes all Lévy processes. So, even when studying non-continuous processes, it is often still the case that the compensator is continuous.

Corollary 3 If X is quasi-left-continuous then its compensator is a continuous FV process.

Furthermore, if X is increasing, then it is quasi-left-continuous if and only if its compensator is continuous.

Proof: If A is the compensator of X, then Lemma 2 implies that {\Delta A_\tau=0} at any inaccessible stopping time {\tau} and, for predictable times {\tau},

\displaystyle  \Delta A_\tau={\mathbb E}[\Delta X_\tau\vert\mathcal{F}_{\tau-}]=0.

The final equality uses the fact that X is quasi-left-continuous, so {\Delta X_\tau=0}.

Conversely, suppose that X is increasing with a continuous compensator. Then,

\displaystyle  {\mathbb E}[\Delta X_\tau\vert\mathcal{F}_{\tau-}]=\Delta A_\tau=0

for any predictable stopping time {\tau}. However, as X is increasing, {\Delta X_\tau} is nonnegative, so {\Delta X_\tau=0} almost surely. \Box

The decomposition of special semimartingales given in the previous post can be modified to obtain a unique decomposition for all semimartingales, whether locally integrable or not. This can be done by subtracting out all jumps which are larger than 1 to obtain a locally bounded semimartingale and applying the decomposition to this to obtain equation (2) below. Lemma 2 implies that the jumps of the martingale term M obtained by doing this are uniformly bounded and, hence, M is a locally bounded martingale. Note that, by combining the final two terms on the right hand side of (2) into a single finite variation term, we decompose the semimartingale X into a locally bounded martingale and an FV process, as stated in the Bichteler-Dellacherie theorem.

Lemma 4 Every semimartingale X decomposes uniquely as

\displaystyle  X_t=M_t+A_t+\sum_{s\le t}1_{\{\vert\Delta X_s\vert > 1\}}\Delta X_s, (2)

where M is a locally bounded martingale and A is a predictable FV process with {A_0=0}.

Furthermore, {\Delta A} is bounded by 1 and, hence, {\Delta M=1_{\{\vert\Delta X\vert\le1\}}\Delta X-\Delta A} is bounded by 2.

Proof: As X is cadlag, it can only have finitely many jumps larger than 1 in any finite interval, so

\displaystyle  V_t\equiv\sum_{s\le t}1_{\{\vert\Delta X_s\vert > 1\}}\Delta X_s

is a well-defined FV process. So, {Y\equiv X-V} is a semimartingale with jumps bounded by 1. Therefore, Y is locally bounded and, in particular, is locally integrable. Applying the special semimartingale decomposition {Y=M+A} gives (2). It still remains to show that M is locally bounded.

Now, for any predictable stopping time {\tau}, Lemma 2 gives

\displaystyle  \vert\Delta A_\tau\vert\le{\mathbb E}[\vert\Delta Y_\tau\vert\;\vert\mathcal{F}_{\tau-}]\le1.

So {\vert\Delta A\vert\le1} and {\vert\Delta M\vert\le\vert\Delta Y\vert+\vert\Delta A\vert\le2} as required. In particular, {\Delta M} is uniformly bounded, so M is a locally bounded martingale. \Box
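The jump-truncation step in this proof can be pictured numerically. The sketch below (the jump distribution is an arbitrary illustrative choice) removes the jumps of magnitude greater than 1 from a pure-jump sample path, leaving a remainder with jumps bounded by 1, as in the construction of V and Y above.

```python
import numpy as np

# Splitting off the large jumps, as in the proof of Lemma 4: V collects
# the jumps of magnitude greater than 1, and Y = X - V has jumps
# bounded by 1.
rng = np.random.default_rng(3)
jumps = rng.standard_cauchy(1000)   # heavy-tailed jump sizes (illustrative)
X = np.cumsum(jumps)                # pure-jump sample path

big = np.abs(jumps) > 1
V = np.cumsum(np.where(big, jumps, 0.0))   # sum of jumps larger than 1
Y = X - V                                   # small-jump remainder

assert np.allclose(X, Y + V)
assert np.max(np.abs(np.diff(Y, prepend=0.0))) <= 1.0
```

With Cauchy jumps the big-jump sum V has infinite expectation, so X itself is not locally integrable, yet Y always is; this is exactly why the decomposition (2) applies to every semimartingale.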

Recall that, for a locally bounded integrand {\xi}, stochastic integration preserves the properties of being a local martingale, and also, of being a predictable FV process. Consequently, if a process X has compensator A, then {\int\xi\,dX} has compensator {\int\xi\,dA}. So, taking compensators commutes with stochastic integration. This can be generalised slightly to non-locally-bounded integrands.

Lemma 5 Suppose that X has compensator A and that {\xi} is a predictable X-integrable process such that {\int\xi\,dX} is locally integrable. Then, {\xi} is A-integrable and {\int\xi\,dX} has compensator {\int\xi\,dA}.

Proof: This is just a restatement of Theorem 3 of the previous post. We can write {X=M+A} for a local martingale M. Then, {\xi} is both M-integrable and A-integrable. Furthermore, {\int\xi\,dA} is a predictable FV process and {\int\xi\,dX-\int\xi\,dA=\int\xi\,dM} is a local martingale. \Box

Similarly, taking the compensator of a process commutes with continuous time-changes. A time-change is defined by a set of finite stopping times {\{\tau_t\colon t\in{\mathbb R}_+\}} such that {\tau_s\le\tau_t} whenever {s\le t}, and we say that this defines a continuous time-change if {t\mapsto\tau_t} is almost-surely continuous. This can be used to transform the filtration into the time-changed filtration {\mathcal{\tilde F}_t\equiv\mathcal{F}_{\tau_t}}. Similarly, if X is any stochastic process, then {\tilde X_t\equiv X_{\tau_t}} is the time-changed process. We say that {\tilde X} is a continuous time-change of X. If X is progressively measurable then {\tilde X} will be {\mathcal{\tilde F}_\cdot}-adapted.

Lemma 6 Suppose that X has compensator A and that {\{\tau_t\}_{t\ge0}} is a continuous time-change. Then, {\tilde A_t\equiv A_{\tau_t}} is the compensator of the time-changed process {\tilde X_t\equiv X_{\tau_t}}, with respect to the filtration {\mathcal{\tilde F}_t\equiv\mathcal{F}_{\tau_t}}.

Proof: By definition, {M=X-A} is a local martingale so, as previously shown, the time changed process {\tilde M_t\equiv M_{\tau_t}} is a local martingale with respect to {\mathcal{\tilde F}_\cdot}.

This shows that {\tilde X-\tilde A} is an {\mathcal{\tilde F}_\cdot}-local martingale. It remains to show that {\tilde A} is a predictable FV process. First, as the time change is continuous, {\tilde A} will be cadlag. Furthermore, the variation of {\tilde A} over an interval {[0,t]} is equal to the variation of A over {[\tau_0,\tau_t]}, which is almost surely finite. Also, as previously shown, continuous time changes take predictable processes to predictable processes. Therefore, {\tilde A} is a predictable FV process. \Box
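A quick sanity check of Lemma 6 on the Poisson example, using the deterministic continuous time-change {\tau_t=t^2} (my own illustrative choice): if X has compensator {\lambda t}, the time-changed process {X_{t^2}} should have compensator {\lambda t^2}. The sketch below checks the mean of the compensated time-changed process at a grid of times.

```python
import numpy as np

# Lemma 6 for the Poisson example with the deterministic continuous
# time-change tau_t = t^2: X has compensator lam*t, so the time-changed
# process X_{t^2} should have compensator lam*t^2.  Empirical sketch:
# at each fixed t, X_{t^2} is Poisson distributed with mean lam*t^2.
rng = np.random.default_rng(5)
lam, n_paths = 2.0, 100_000
t = np.linspace(0.1, 2.0, 20)

N = rng.poisson(lam * t**2, size=(n_paths, len(t)))
bias = np.abs((N - lam * t**2).mean(axis=0)).max()
assert bias < 0.05
```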

Processes with Locally Integrable Variation

We now specialise to processes with locally integrable variation. To say that a cadlag adapted process X has locally integrable variation means that there is a sequence of stopping times {\tau_n} increasing to infinity, and such that the variations {\int_0^{\tau_n}\,\vert dX\vert} are all integrable. This is equivalent to X being a locally integrable FV process.

Lemma 7 Let X be a cadlag adapted process. Then, the following are equivalent.

  • X has locally integrable variation.
  • X is a locally integrable FV process.

Proof: If X has locally integrable variation then, in particular, it has locally finite variation and must be an FV process. It only needs to be shown that, for FV processes, the property of being locally integrable is equivalent to having locally integrable variation. However, local integrability of a cadlag adapted process is equivalent to the local integrability of its jumps. Also, the variation process {V_t=\int_0^t\,\vert dX\vert} has jumps {\Delta V=\vert\Delta X\vert}. This gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &X{\rm\ is\ locally\ integrable}\ \Leftrightarrow\ \Delta X{\rm\ is\ locally\ integrable}\smallskip\\ \Leftrightarrow\ &\Delta V{\rm\ is\ locally\ integrable}\smallskip\\ \Leftrightarrow\ &V{\rm\ is\ locally\ integrable} \end{array}

as required. \Box

In particular, as cadlag predictable processes are locally bounded, this means that compensators automatically have locally integrable variation. However, if X has locally integrable variation, then we can go a bit further and bound the expected variation of its compensator.

Lemma 8 Let X be a cadlag adapted process with locally integrable variation, and let A be its compensator. Then,

\displaystyle  {\mathbb E}\left[\int_0^\tau\vert\xi\vert\,\vert dA\vert\right]\le{\mathbb E}\left[\int_0^\tau\vert\xi\vert\,\vert dX\vert\right] (3)

for all stopping times {\tau} and predictable processes {\xi}. Furthermore, if the right hand side of (3) is finite then

\displaystyle  {\mathbb E}\left[\int_0^\tau\xi\,dA\right]={\mathbb E}\left[\int_0^\tau\xi\,dX\right]. (4)

Proof: By applying monotone convergence to (3) and dominated convergence to (4), it is enough to consider the case where {\xi} is bounded. Furthermore, replacing {\xi} by {1_{[0,\tau]}\xi} if necessary, we only need to consider the case with {\tau=\infty}.

Suppose that {\xi} is bounded, and let {M=X-A}. Then, {\int\xi\,dM} is a local martingale. So, there exist stopping times {\tau_n} increasing to infinity such that {\int_0^{t\wedge\tau_n}\xi\,dM} are uniformly integrable martingales, hence have zero expectation, and such that {X^{\tau_n}} has integrable variation. Therefore,

\displaystyle  {\mathbb E}\left[\int_0^{\tau_n}\xi\,dX\right]={\mathbb E}\left[\int_0^{\tau_n}\xi\,dA\right]. (5)

As A is a predictable FV process, there exists a predictable process {\alpha} with {\vert\alpha\vert=1} such that {\int\alpha\,dA=\int\,\vert dA\vert} is the variation of A. Replacing {\xi} by {\vert\xi\vert\alpha} in (5) gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\,\vert dA\vert\right]&\displaystyle={\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\alpha\,dA\right]\smallskip\\ &\displaystyle={\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\alpha\,dX\right]\smallskip\\ &\displaystyle\le{\mathbb E}\left[\int_0^{\tau_n}\vert\xi\vert\,\vert dX\vert\right]. \end{array}

Letting n increase to infinity and using monotone convergence gives (3).

Now, suppose that the right hand side of (3) is finite. Then, {N\equiv\int\xi\,dX-\int\xi\,dA} is a local martingale with integrable variation, so is a martingale dominated in {L^1}. Therefore, {{\mathbb E}[N_\infty]=0}, giving (4). \Box

We can go even further and restrict to increasing processes. In that case, compensators are themselves increasing, and we get equality between expectations of stochastic integrals with respect to a process and with respect to its compensator. This is sometimes used for the definition of the compensator.

Lemma 9 Let X be a cadlag, adapted and locally integrable increasing process. Then, its compensator A is also increasing and,

\displaystyle  {\mathbb E}\left[\int_0^\tau\xi\,dX\right]={\mathbb E}\left[\int_0^\tau\xi\,dA\right] (6)

for all stopping times {\tau} and nonnegative predictable processes {\xi}.

Furthermore, the compensator A of X is the unique right-continuous predictable and increasing process with {A_0=0} which satisfies (6) for all nonnegative predictable {\xi} and {\tau\equiv\infty}.

Proof: As with all predictable FV processes, A decomposes into the difference of increasing processes, {A=A^+-A^-}, and there is a predictable set S such that {A^-=-\int1_S\,dA}. Choose stopping times {\tau_n} increasing to infinity such that {\int_0^{\tau_n}1_S\,dX} are integrable. Then, (4) gives

\displaystyle  {\mathbb E}[A^-_{\tau_n}]=-{\mathbb E}\left[\int_0^{\tau_n}1_S\,dX\right]\le0.

So, {A^-_{\tau_n}=0} almost surely. Therefore, {A^-_\infty=0} and, as it is increasing from 0, {A^-} is identically zero. So, {A=A^+} is increasing.

Now, let {\xi} be a nonnegative predictable process. By monotone convergence, it is enough to prove (6) in the case where {\xi} is bounded. Then, there exist stopping times {\tau_n} increasing to infinity such that {\int_0^{\tau_n}\xi\,dX} are integrable. Equation (4) gives

\displaystyle  {\mathbb E}\left[\int_0^{\tau_n\wedge\tau}\xi\,dX\right]={\mathbb E}\left[\int_0^{\tau_n\wedge\tau}\xi\,dA\right].

Letting n go to infinity and using monotone convergence gives the result.

We now prove the ‘furthermore’ part of the lemma. It just needs to be shown that if A is right-continuous, predictable and increasing with {A_0=0}, then identity (6) just for the case with {\tau=\infty} is enough to guarantee that A is the compensator of X. Equivalently, that {M\equiv X-A} is a local martingale.

As X is locally integrable, there exists a sequence of stopping times {\tau_n} increasing to infinity such that {X_{\tau_n}-X_0} is integrable. Then, for any nonnegative predictable {\xi},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\int_0^\infty\xi\,dX^{\tau_n}\right]&\displaystyle={\mathbb E}\left[\int_0^\infty1_{(0,\tau_n]}\xi\,dX\right]\smallskip\\&\displaystyle={\mathbb E}\left[\int_0^\infty1_{(0,\tau_n]}\xi\,dA\right]\smallskip\\&\displaystyle={\mathbb E}\left[\int_0^\infty\xi\,dA^{\tau_n}\right]. \end{array}

In particular, taking {\xi=1} shows that {A_{\tau_n}} is integrable and, hence, that {1_{\{\tau_n > 0\}}M^{\tau_n}} is integrable. Then, letting {\xi} be any nonnegative elementary integrand gives {{\mathbb E}[\int\xi\,dM^{\tau_n}]=0} and, so, {1_{\{\tau_n > 0\}}M^{\tau_n}} is a martingale. Therefore, {X-A} is a local martingale as required. \Box
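Equation (6) can be tested numerically on the Poisson example, with a deterministic integrand of my own choosing. For a Poisson process with rate {\lambda} and compensator {\lambda t}, the left side of (6) is the expected sum of {\xi} over the jump times, and the right side is {\lambda\int\xi(t)\,dt}; the sketch below compares the two by Monte Carlo.

```python
import numpy as np

# Equation (6) for a Poisson process X with rate lam, compensator
# A_t = lam*t, and deterministic integrand xi(t) = t^2 on [0,T]:
# E[int xi dX] is the expected sum of xi over the jump times, while
# int_0^T xi dA = lam * T^3 / 3.  Illustrative Monte Carlo sketch.
rng = np.random.default_rng(4)
lam, T, n_paths = 3.0, 1.0, 100_000
xi = lambda t: t**2

# Given the number of jumps on [0,T], the jump times are iid uniform.
counts = rng.poisson(lam * T, n_paths)
lhs = np.mean([xi(rng.uniform(0, T, c)).sum() for c in counts])
rhs = lam * T**3 / 3
assert abs(lhs - rhs) < 0.03
```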

Approximation by the Discrete-Time Compensator

The definition of the compensator of a continuous-time process given by Definition 1 above does appear to be considerably different from the much simpler case of discrete-time compensators given by equation (1). It is natural to ask whether the continuous-time situation does really arise from the discrete-time formula applied in the limit over small time steps. For processes with integrable variation this is indeed the case, although some care does need to be taken with how we take the limit. The idea is to discretise time using a partition, apply (1) to obtain an approximation to the compensator, then take the limit under the appropriate topology as the mesh of the partition goes to zero.

Define a stochastic partition P of {{\mathbb R}_+} to be a sequence of stopping times

\displaystyle  0=\tau_0\le\tau_1\le\tau_2\le\cdots\uparrow\infty.

The mesh of the partition is denoted by {\vert P\vert=\sup_n(\tau_n-\tau_{n-1})}. Then, given an integrable process X we define its compensator along P by

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[X_{\tau_n}-X_{\tau_{n-1}}\vert\mathcal{F}_{\tau_{n-1}}\right]. (7)

Now letting the mesh of P go to zero, the question is whether or not {A^P} tends to A. The precise answer will depend on the topology in which we take the limit. However, in the quasi-left-continuous case it turns out that convergence occurs uniformly in {L^1}, which is about as strong a mode of convergence as we could have hoped for. There is one further technical point; the mesh {\vert P\vert} is itself a random variable. So, in taking the limit as {\vert P\vert} goes to zero, it is also necessary to state the topology under which {\vert P\vert\rightarrow0} is to be understood. In order to obtain a strong result, it is best to use as weak a topology as possible. I use convergence in probability, denoted by {\vert P\vert\xrightarrow{\rm P}0} here.

Theorem 10 Let X be a cadlag adapted process with integrable total variation, and A be its compensator. If X is quasi-left-continuous or, more generally, if A is continuous, then {A^P} tends uniformly to A in {L^1}. That is,

\displaystyle  {\mathbb E}\left[\sup_{t\in{\mathbb R}_+}\left\vert A^P_t-A_t\right\vert\right]\rightarrow0

as {\vert P\vert\xrightarrow{\rm P}0}.

Stated explicitly, this convergence means that for each {\epsilon > 0} there exists a {\delta > 0} such that {{\mathbb E}[\sup_t\vert A^P_t-A_t\vert] < \epsilon} for all partitions P satisfying {{\mathbb P}(\vert P\vert > \delta) < \delta}. Or, in terms of sequences, if {P_n} is a sequence of partitions with {\vert P_n\vert} tending to zero in probability, then {\sup_t\vert A^{P_n}_t-A_t\vert} tends to zero in {L^1}.
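For a quasi-left-continuous example, the Poisson process on deterministic partitions makes (7) explicit: each conditional expectation is just {\lambda} times the interval length, so {A^P} is a deterministic staircase and the uniform error is bounded by {\lambda\vert P\vert}. A small sketch (partition and rate are illustrative choices):

```python
import numpy as np

# Discrete approximation (7) for a Poisson process with rate lam on the
# deterministic partition t_k = k*mesh: each conditional expectation is
# lam*mesh exactly, so on (t_{n-1}, t_n] we have A^P_t = lam*t_n, and
# sup_t |A^P_t - lam*t| <= lam*mesh, consistent with Theorem 10.
lam, T = 2.0, 1.0
tt = np.linspace(1e-6, T, 200_001)   # fine evaluation grid

def sup_error(mesh):
    n = np.ceil(tt / mesh)           # index n with t in (t_{n-1}, t_n]
    return np.max(np.abs(lam * n * mesh - lam * tt))

errs = [sup_error(m) for m in (0.1, 0.01, 0.001)]
assert errs[0] > errs[1] > errs[2]       # uniform error shrinks with |P|
assert errs[2] <= lam * 0.001 + 1e-9     # bounded by lam*|P|
```

Here the randomness drops out entirely, which is special to the Poisson case; in general {A^P} is random and the convergence is the {L^1} statement of Theorem 10.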

The proof of Theorem 10 will be given in a moment but, first, let’s consider what happens when A is not continuous. Then Theorem 10 does not apply, and convergence does not occur uniformly in {L^1}. In fact, {A^P} need not converge to A in {L^1} at any positive time. Even if we were to just look at the weaker notion of convergence in probability, the limit still need not exist. As I’ll show using an example in an upcoming post, what can go wrong is that, at a jump time of A, the approximation {A^P} can randomly overshoot or undershoot the jump by an amount and probability which does not vanish as the mesh goes to zero. In a sense, though, the jump in the approximation will match that of A on average. We can capture this rather weak notion of convergence by the weak topology on {L^1}. A sequence {Z_n} of integrable random variables is said to converge weakly to the (integrable) limit Z if {{\mathbb E}[Z_nY]\rightarrow{\mathbb E}[ZY]} as n goes to infinity for any bounded random variable Y. As an example demonstrating that weak convergence does not imply convergence in probability, consider a sequence {X_n} of independent random variables, each with the uniform distribution on [-1,1]. This cannot possibly converge to anything in probability but, with respect to the sigma-algebra generated by {\{X_n\}}, it does converge to zero in the weak topology. If Y is measurable with respect to finitely many of the {X_n} then, by independence, {{\mathbb E}[X_nY]=0} for all but finitely many n. The set of such Y is dense in {L^1}, from which it follows that {{\mathbb E}[X_nY]\rightarrow0} as {n\rightarrow\infty} for all integrable Y, so {X_n\rightarrow0} weakly.
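The independent-uniforms example can be checked directly by simulation. The sketch below (sample sizes and the choice of Y are illustrative) shows that {{\mathbb E}[X_nY]} vanishes as soon as n exceeds the indices Y depends on, even though the {X_n} themselves do not settle down.

```python
import numpy as np

# Weak convergence without convergence in probability: X_n iid uniform
# on [-1,1].  For bounded Y depending only on the first few X_k,
# independence forces E[X_n Y] = E[X_n] E[Y] = 0 once n exceeds those
# indices; Monte Carlo estimates below.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200_000, 10))
Y = np.sign(X[:, 0])                  # bounded, depends only on X_1

corr = [abs(np.mean(X[:, n] * Y)) for n in range(10)]
assert corr[0] > 0.4                  # E[X_1 sign(X_1)] = E|X_1| = 1/2
assert max(corr[1:]) < 0.01           # ~0 for n >= 2 by independence
```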

Now, it is true that the discrete-time approximations {A^P} do converge weakly to the compensator at each time.

Theorem 11 Let X be a cadlag adapted process with integrable total variation, and A be its compensator. Then {A^P} tends to A under the weak topology in {L^1} at each time. More precisely, for any random time {\tau\colon\Omega\rightarrow{\mathbb R}_+\cup\{\infty\}},

\displaystyle  A^P_\tau\rightarrow A_\tau

weakly in {L^1} as {\vert P\vert\xrightarrow{\rm P}0}.

Stated explicitly, this means that for each uniformly bounded random variable Y and constant {\epsilon > 0}, there exists {\delta > 0} such that {\vert{\mathbb E}[(A^P_\tau-A_\tau)Y]\vert < \epsilon} for all partitions P satisfying {{\mathbb P}(\vert P\vert > \delta) < \delta}. Or, in terms of sequences, if {P_n} is a sequence of partitions with mesh tending to zero in probability and Y is a uniformly bounded random variable, then {{\mathbb E}[A^{P_n}_\tau Y]\rightarrow{\mathbb E}[A_\tau Y]}. Also note that, in Theorem 11, the time {\tau} is any random time, not necessarily a stopping time.

In some approaches, Theorem 10 or 11, or an equivalent, is used to prove the existence of compensators in the first place. That is, these results are proved without the a priori assumption that compensators exist. In the treatment given in these notes we have already proved the existence of the compensator by other means, and just need to show that the approximations do indeed converge to the expected limit.

Before moving on to the proofs of these two theorems, let us note some simple facts concerning the definitions. First, for the sake of brevity, given any process {Z_n} depending on the discrete parameter n, I will use the notation {\delta Z_n} to denote the difference {Z_n-Z_{n-1}}. As the process X has integrable variation, the same is true for A (Lemma 8). Therefore {X-A} is an {L^1}-dominated local martingale, so is a true martingale, and is uniformly integrable. So {{\mathbb E}[X_\tau-X_\sigma\vert\mathcal{F}_\sigma]={\mathbb E}[A_\tau-A_\sigma\vert\mathcal{F}_\sigma]} for any stopping times {\sigma\le\tau}. We can rewrite the definition of {A^P} to express it in terms of the compensator A instead of X. Doing this, equation (7) is replaced by

\displaystyle  A^P_t=\sum_{n=1}^\infty1_{\{\tau_{n-1} < t\}}{\mathbb E}\left[\delta A_{\tau_n}\vert\mathcal{F}_{\tau_{n-1}}\right]. (8)

Equivalently, {A^P} is left-continuous and is constant over each of the intervals {(\tau_{n-1},\tau_n]}, with {A^P_0=0} and

\displaystyle  \delta A^P_{\tau_n}={\mathbb E}\left[\delta A_{\tau_n}\vert\mathcal{F}_{\tau_{n-1}}\right]. (9)

In particular, Jensen’s inequality gives {{\mathbb E}[\vert\delta A^P_{\tau_n}\vert]\le{\mathbb E}[\vert\delta A_{\tau_n}\vert]}. So, summing over n, the expected total variation of {A^P} is

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\sum_{n=1}^\infty\left\vert\delta A^P_{\tau_n}\right\vert\right]\le{\mathbb E}\left[\sum_{n=1}^\infty\vert\delta A_{\tau_n}\vert\right], \end{array} (10)

which is bounded by the expected total variation of A.

Proof of Theorem 10: Corollary 3 tells us that A is continuous whenever X is quasi-left-continuous. Let us start by considering the case where A has total variation bounded by some positive constant K, and define the process {M_n=A_{\tau_n}-A^P_{\tau_n}} over nonnegative integer n. This is a discrete-time {\mathcal{F}_{\tau_n}}-adapted process and (9) tells us that {{\mathbb E}[\delta M_n\vert\mathcal{F}_{\tau_{n-1}}]=0}. So, M is a martingale. Then, we can obtain the following sequence of inequalities,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\sup_nM_n^2\right]&\displaystyle\le4\lim_{n\rightarrow\infty}{\mathbb E}[M_n^2]\smallskip\\ &\displaystyle=4\sum_n{\mathbb E}[(\delta M_n)^2]\smallskip\\ &\displaystyle\le8\sum_n{\mathbb E}\left[(\delta A_{\tau_n})^2+{\mathbb E}[\delta A_{\tau_n}\vert\mathcal{F}_{\tau_{n-1}}]^2\right]\smallskip\\ &\displaystyle\le16\sum_n{\mathbb E}[(\delta A_{\tau_n})^2]\\ &\displaystyle\le16K{\mathbb E}\left[\sup_n\vert \delta A_{\tau_n}\vert\right]. \end{array} (11)

The first line is Doob’s L2 martingale inequality. The second is the Ito isometry which, in discrete-time, just consists in expanding out {M_n^2} as {\sum_{j,k\le n}\delta M_j\delta M_k} and then noting that the martingale property implies that {{\mathbb E}[\delta M_j\delta M_k]=0} for all {j\not=k}. The third line is using (9) to expand {\delta M_n} as the difference of {\delta A_{\tau_n}} and {{\mathbb E}[\delta A_{\tau_n}\vert\mathcal{F}_{\tau_{n-1}}]}, and the Cauchy-Schwarz inequality to bound by the sum of squares. The fourth line is using Jensen’s inequality to move the square inside the conditional expectation. The final line is using {(\delta A_{\tau_n})^2\le\vert\delta A_{\tau_n}\vert\sup_n\vert\delta A_{\tau_n}\vert}, summing over n and bounding {\sum_n\vert\delta A_{\tau_n}\vert} by K, as was assumed above.

Next, using the fact that {A^P} is constant on each interval {(\tau_{n-1},\tau_n]},

\displaystyle  \sup_{t\in(\tau_{n-1},\tau_n]}\vert A^P_t-A_t\vert\le\vert M_n\vert+\sup_{t\in(\tau_{n-1},\tau_n]}\vert A_{\tau_n}-A_t\vert.

If we square, take the supremum over n, and take the expected value of this, then use (11) to bound the {{\mathbb E}[\sup_n M_n^2]} term, we obtain the bound

\displaystyle  {\mathbb E}\left[\sup_{t\in{\mathbb R}_+}(A^P_t-A_t)^2\right]\le(32K+2){\mathbb E}\left[\sup_{\vert t-s\vert \le\vert P\vert}\vert A_t-A_s\vert\right]. (12)

However, as A is continuous with finite total variation, it is uniformly continuous. That is, {\vert A_t-A_s\vert} tends to zero as {\vert t-s\vert} goes to zero. So, if {P_n} is a sequence of partitions with mesh going to zero in probability, then {\sup_{\vert t-s\vert\le\vert P_n\vert}\vert A_t-A_s\vert} tends to zero in probability as n goes to infinity. As A is uniformly bounded by K, dominated convergence implies that the right hand side of (12) tends to zero as {\vert P\vert} goes to zero in probability and, therefore {A^P\rightarrow A} uniformly in {L^2} and, hence, in {L^1}. This completes the proof when A has uniformly bounded variation.

Finally, consider the case where A has integrable total variation {V_\infty}. Then, for any fixed {K > 0} let {\tau} be the first time at which the variation of A hits K. This is a stopping time, and the stopped process {B\equiv A^\tau} has variation bounded by K. Letting {C=A-A^\tau}, then we can define the compensators of B and C on the partition P as above. By linearity of the definition, {A^P=B^P+C^P} and

\displaystyle  \sup_{t\in{\mathbb R}_+}\vert A^P_t-A_t\vert\le\sup_{t\in{\mathbb R}_+}\vert B^P_t-B_t\vert+\sup_{t\in{\mathbb R}_+}\vert C^P_t-C_t\vert.

By the argument above, the first term on the right hand side tends to zero in {L^1} as the mesh of P goes to zero in probability. The second term is bounded by the total variation of {C^P} and C, so its expected value is bounded by twice the expected variation of C. If {V_t} is the variation of A over the interval {[0,t]}, then the total variation of C is {V_\infty-V_\tau}. Then,

\displaystyle  \limsup_{\vert P\vert\rightarrow0}{\mathbb E}\left[\sup_{t\in{\mathbb R}_+}\vert A^P_t-A_t\vert\right]\le2{\mathbb E}\left[ V_\infty-V_\tau\right].

However, {V_\infty-V_\tau} is bounded by {V_\infty} and, as K goes to infinity, then {\tau} goes to infinity. Dominated convergence shows that the right hand side can be made as small as we like by making K large, so the left hand side is zero as required. {\Box}

Now that the proof of Theorem 10 is out of the way, we can move on to the case where the compensator is not continuous and give a proof of Theorem 11. In this case, we do not obtain uniform convergence in {L^1} as discussed above and, instead, only manage weak convergence. One way of proving this is to justify the following sequence of equalities and limit. Letting M be a cadlag version of the process {M_t={\mathbb E}[1_{\{t < \tau\}}Y\vert\mathcal{F}_t]} and {M^P_t=\sum_n1_{\{\tau_{n-1} < t\le\tau_n\}}M_{\tau_{n-1}}} be an approximation on the partition P,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[A^P_\tau Y\right]&\displaystyle=\sum_n{\mathbb E}\left[\delta A_{\tau_n}M_{\tau_{n-1}}\right]\smallskip\\ &\displaystyle={\mathbb E}\left[\int_0^\infty M^P\,dA\right]\smallskip\\ &\displaystyle\rightarrow{\mathbb E}\left[\int_0^\infty M_{s-}\,dA_s\right]\smallskip\\ &\displaystyle={\mathbb E}\left[\int_0^\infty Y1_{[0,\tau]}\,dA\right]\smallskip\\ &\displaystyle={\mathbb E}\left[A_\tau Y\right] \end{array}

The limit here is just using the fact that {M^P_s\rightarrow M_{s-}} as the mesh of P goes to zero. So, according to this method, convergence in the weak topology is a consequence of dominated convergence. However, as we already have Theorem 10 giving the result when the compensator is continuous, we can simplify the proof of Theorem 11. It is only necessary to prove the case where A has a single discontinuity at a predictable stopping time, which can then be pieced together with the continuous case to obtain the result.

Lemma 12 Suppose that {A=U1_{[\sigma,\infty)}} for a predictable stopping time {\sigma > 0} and integrable {\mathcal{F}_{\sigma-}}-measurable random variable U. Then, {A^P_\tau\rightarrow A_\tau} weakly in {L^1} for all random times {\tau}.

Proof: Let us first suppose that {\tau} is a stopping time, and let Y be any uniformly bounded random variable. Also, suppose for now that the filtration is right-continuous, so that we can take a cadlag version of the martingale {M_t={\mathbb E}[Y\vert\mathcal{F}_t]}. This is uniformly bounded and, by optional sampling, we have {M_{\tau_n}={\mathbb E}[Y\vert\mathcal{F}_{\tau_n}]}. Expanding out {A^P_\tau} using equation (8), which is absolutely convergent in {L^1} by (10), we obtain,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[A^P_\tau Y\right]&\displaystyle=\sum_n{\mathbb E}\left[1_{\{\tau_{n-1} < \tau\}}{\mathbb E}[\delta A_{\tau_n}\vert\mathcal{F}_{\tau_{n-1}}]Y\right]\smallskip\\ &\displaystyle=\sum_n{\mathbb E}\left[1_{\{\tau_{n-1} < \tau\}}M_{\tau_{n-1}}\delta A_{\tau_n}\right]. \end{array}

However, {\delta A_{\tau_n}} is zero unless {\tau_{n-1} < \sigma\le\tau_n}, in which case it is equal to U. So, letting {\tau^P_*} denote the maximum {\tau_n} less than {\sigma},

\displaystyle  {\mathbb E}[A^P_\tau Y]={\mathbb E}[1_{\{\tau^P_* < \tau\}}M_{\tau^P_*}U].

Letting the mesh of P go to zero, {\tau^P_*} tends to {\sigma} from the left,

\displaystyle  \lim_{\vert P\vert\rightarrow0}{\mathbb E}[A^P_\tau Y]={\mathbb E}\left[1_{\{\sigma \le\tau,\sigma < \infty\}}M_{\sigma-}U\right].

Now, as {\sigma} is predictable, there exists a sequence of stopping times {\sigma_m} strictly increasing to {\sigma}. Then, {M_{\sigma_m}={\mathbb E}[Y\vert\mathcal{F}_{\sigma_m}]} tends to {M_{\sigma-}} but, by Lévy’s upwards convergence theorem, it also converges to {{\mathbb E}[Y\vert\mathcal{F}_{\sigma-}]}. Hence,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\lim_{\vert P\vert\rightarrow0}{\mathbb E}[A^P_\tau Y]&\displaystyle={\mathbb E}\left[1_{\{\sigma \le\tau,\sigma < \infty\}}{\mathbb E}[Y\vert\mathcal{F}_{\sigma-}]U\right]\smallskip\\ &\displaystyle={\mathbb E}\left[1_{\{\sigma \le\tau,\sigma < \infty\}}YU\right]\smallskip\\ &\displaystyle={\mathbb E}\left[A_\tau Y\right] \end{array}

as required. Here, we have used the fact that U is {\mathcal{F}_{\sigma-}}-measurable. This proves the case where {\tau} is a stopping time.

Now, suppose that {\tau} is any random time. Noting that, from the definition, {\delta A^P_{\tau_n}} is zero whenever {\tau_{n-1}\ge\sigma},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[1_{\{\tau\ge\sigma\}} A^P_\tau Y]&\displaystyle={\mathbb E}[1_{\{\tau\ge\sigma\}}A^P_\sigma Y]\smallskip\\ &\displaystyle\rightarrow{\mathbb E}[1_{\{\tau\ge\sigma\}}A_\sigma Y]\smallskip\\ &\displaystyle={\mathbb E}[1_{\{\tau\ge\sigma\}}A_\tau Y] \end{array} (13)

as the mesh of P goes to zero in probability. This limit follows from the argument above applied to {1_{\{\tau\ge\sigma\}}Y}. On the other hand, let {\sigma_m} be a sequence of stopping times strictly increasing to {\sigma}. Without loss of generality, we can suppose that U and Y are nonnegative. Then, as {A^P} is increasing,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[1_{\{\tau < \sigma\}}A^P_\tau Y]&\displaystyle\le{\mathbb E}[1_{\{\tau\le\sigma_m\}}A^P_{\sigma_m}Y]+{\mathbb E}[1_{\{\sigma_m < \tau < \sigma\}}A^P_\sigma Y]\smallskip\\ &\displaystyle\rightarrow{\mathbb E}[1_{\{\tau\le\sigma_m\}}A_{\sigma_m}Y]+{\mathbb E}[1_{\{\sigma_m < \tau < \sigma\}}A_\sigma Y] \end{array}

Again, this limit follows from the argument above, at the stopping times {\sigma_m} and {\sigma}. The first term on the right-hand side is zero, as {A_{\sigma_m}=0}. The second term can be made as small as we like by choosing m large. So, combining this with (13) gives

\displaystyle  {\mathbb E}[A^P_\tau Y]\rightarrow{\mathbb E}[1_{\{\tau\ge\sigma\}}A_\tau Y]={\mathbb E}[A_\tau Y]

as required.

This completes the proof of the lemma in the case where the filtration is right-continuous, so that the martingale M has a cadlag version. The only reason a cadlag version was required was so that the optional sampling result {M_\tau={\mathbb E}[Y\vert\mathcal{F}_\tau]} holds for stopping times {\tau}, and so that the left limit {M_{\sigma-}=\lim_{s\uparrow\uparrow\sigma}M_s} is well-defined. However, right-continuity of the filtration is not necessary for these properties to be satisfied. Rather than taking a cadlag version of M, there always exists a version with left and right limits everywhere which is right-continuous outside of a fixed countable set. Furthermore, optional sampling still holds for such versions, and the argument above carries through unchanged. \Box
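To see the statement of the lemma in action, here is a Monte Carlo sketch (a toy example of my own, not taken from the proof) with {\sigma=1} deterministic, hence predictable, {U=B_1} for a standard Brownian motion B with its natural filtration, and {Y=B_1}. The discrete compensator has its single jump {{\mathbb E}[U\vert\mathcal{F}_{\tau^P_*}]=B_{1-h}} at the last partition time before {\sigma}, and the pairing {{\mathbb E}[A^P_2Y]={\mathbb E}[B_{1-h}B_1]=1-h} tends to {{\mathbb E}[A_2Y]={\mathbb E}[B_1^2]=1}:

```python
import numpy as np

# Toy check of the lemma: sigma = 1 deterministic, U = B_1 for a standard
# Brownian motion B, A = U 1_{[1,oo)}.  On the uniform partition of mesh h,
# the discrete compensator jumps by E[U | F_{t*}] = B_{t*} at the last
# partition time t* = 1 - h before sigma.  Pairing against Y = B_1 with
# tau = 2 gives E[A^P_2 Y] = E[B_{1-h} B_1] = 1 - h -> 1 = E[A_2 Y].

rng = np.random.default_rng(2)
n = 200_000
pairings = []
for h in [0.5, 0.1, 0.01]:
    t_star = 1.0 - h
    B_star = rng.normal(0.0, np.sqrt(t_star), n)     # B at time t* = 1 - h
    B_one = B_star + rng.normal(0.0, np.sqrt(h), n)  # B at time 1
    pairings.append(np.mean(B_star * B_one))         # estimates E[A^P_2 Y]
    print(f"mesh {h:5.2f}: E[A^P_2 Y] ~ {pairings[-1]:.4f}  (target 1)")
```

In this example the Brownian filtration is continuous, so {B_{1-h}} in fact converges to {B_1} in {L^2} and the convergence is much stronger than weak; still, the computation illustrates the pairing {{\mathbb E}[A^P_\tau Y]\rightarrow{\mathbb E}[A_\tau Y]} used in the proof.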

Combining this lemma with Theorem 10 completes the proof of Theorem 11.

Proof of Theorem 11: As the compensator A is predictable, there exists a sequence of predictable stopping times {\sigma_n > 0} such that {\sigma_m\not=\sigma_n} whenever {m\not=n} and {\sigma_n < \infty}, and such that {\bigcup_n[\sigma_n]} contains all the jump times of A. So, we can decompose A into a continuous term plus a sum over its jumps

\displaystyle  A=A^c+\sum_{m=1}^\infty1_{[\sigma_m,\infty)}\Delta A_{\sigma_m}.

Furthermore, the sum of the variations of these terms is equal to the variation of A, so it converges uniformly in {L^1}. This means that we can calculate the compensator of each of these terms along the partition P, multiply by a uniformly bounded random variable Y, and take expectations to get

\displaystyle  {\mathbb E}[A^P_\tau Y]={\mathbb E}\left[(A^c)^P_\tau Y\right]+\sum_{m=1}^\infty{\mathbb E}\left[\left(1_{[\sigma_m,\infty)}\Delta A_{\sigma_m}\right)^P_\tau Y\right].

The term {{\mathbb E}[(A^c)^P_\tau Y]} converges to {{\mathbb E}[A^c_\tau Y]} as the mesh of the partition goes to zero, by Theorem 10. As A is predictable, so that {\Delta A_{\sigma_m}} is {\mathcal{F}_{\sigma_m-}}-measurable, Lemma 12 guarantees that the terms inside the sum converge to {{\mathbb E}[(1_{[\sigma_m,\infty)}\Delta A_{\sigma_m})_\tau Y]}. Also, equation (10) says that the terms inside the sum are bounded by {{\mathbb E}[\vert \Delta A_{\sigma_m} Y\vert]}, which has finite sum. So, dominated convergence allows us to exchange the limit with the summation,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\lim_{\vert P\vert\rightarrow0}{\mathbb E}\left[A^P_\tau Y\right]&\displaystyle=\lim_{\vert P\vert\rightarrow0}{\mathbb E}[(A^c)^P_\tau Y]+\sum_{m=1}^\infty\lim_{\vert P\vert\rightarrow0}{\mathbb E}\left[\left(1_{[\sigma_m,\infty)}\Delta A_{\sigma_m}\right)^P_\tau Y\right]\smallskip\\ &\displaystyle={\mathbb E}\left[A^c_\tau Y\right]+\sum_{m=1}^\infty{\mathbb E}\left[\left(1_{[\sigma_m,\infty)}\Delta A_{\sigma_m}\right)_\tau Y\right]\smallskip\\ &\displaystyle={\mathbb E}[A_\tau Y] \end{array}

as required.{\Box}

16 thoughts on “Compensators”

  1. Hi George,

    At the beginning of Lemma 7’s proof, and also at the beginning of the preceding post, it is mentioned that a process of Locally Finite Variation is an FV process.

    As defined in these notes, at the beginning of the “Properties of Stochastic Integral” post, a process is an FV process if it is càdlàg, adapted with respect to a complete filtered probability space, and such that with probability one it has finite variation over bounded time intervals.

    I don’t see how it is obvious that a Locally Finite Variation process is an FV process. Maybe I got it wrong, but there is nothing in the localization procedure that entails this fact. Could you elaborate on this?

    Best Regards

      1. Hi,

        “Locally finite variation implies finite variation over every bounded interval”: it is precisely this point that I cannot see.

        If X_t has locally finite variation, this means there exists a sequence of stopping times \tau_n increasing almost surely to +\infty such that the stopped process X_t^{\tau_n} has finite variation over bounded intervals, right?

        But why should this property hold true when passing to the limit ?

        I think I miss something obvious sorry about that…

        Best regards

        1. Well, if there exists a sequence of random times \tau_n (stopping times or not) which increase to infinity, and X^{\tau_n} has finite variation over the interval [0,t], then X must have finite variation on [0,t]. This follows because we have \tau_n \ge t for large enough n.

        2. Hi George,

          I think I got it this time. To be complete, I’ll try to prove the equivalence. The contrapositive proposition is the easiest way for me to see this. So I try to show that X not FV implies X not of Locally Finite Variation.

          If X is not FV, then there exists an event A of strictly positive probability over which X is of infinite variation over some interval [0,t]. In that case, X cannot be of Locally Finite Variation, because for any sequence of stopping times increasing almost surely to +\infty, the stopped process is, as n \to \infty, of infinite variation over [0,t] conditionally on the set A of strictly positive probability.

          Best regards

        3. Yes, that works, although I don’t really think that going to the contrapositive is easier. Just note that, for any finite t, we (almost surely) have \tau_n\ge t for large enough n, and finite variation on [0,\tau_n], so finite variation on [0,t].

        4. Hi George,

          I am not sure to get exactly what you mean when you say :
          “for any finite t>0, we (almost surely) have \tau_n\ge t for large enough n,”

          What about \tau_n=n\tau_1 for \tau_1 an absolutely continuous random variable with full support over \mathbb{R}^+ (for example, an exponential rv of parameter 1)?
          This sequence is a proper localizing sequence of stopping times, as it increases almost surely to +\infty, but for a fixed time t>0 and any n, we don’t have \tau_n\ge t almost surely, as for any n we have P(\tau_n < t) > 0.
          I think that it’s this point that has disturbed me from the beginning, and which is why I had to use the contrapositive proposition to convince myself.

          Best regards

        5. “for almost every ω ∈ Ω there exists an n such that \tau_n(\omega)\ge t …” is maybe a clearer way of saying it. That is, n depends on ω. This is implied by \tau_n almost surely tending to infinity (it’s equivalent to \tau_n being almost surely unbounded). It can be unclear sometimes exactly what is held fixed and what is dependent on ω, especially when ω is implicit as is usually the case.

          Btw, I edited some latex in your post, hope it’s correct now. You have to be especially careful with < and > signs, which can get interpreted as marking HTML tags. The only foolproof way I know is to use the HTML codes &lt; and &gt;.

        6. Great !

          Now there’s nothing left unclear to me; sorry to take so long to get such elementary details.

          Thx for the latex advice and corrections.

          Best regards

  2. Hi George,

    I have another elementary question (I’m afraid) to ask, about the proof of point 2 of Lemma 2.

    There you prove that \mathbb{E}[Y_\tau| \mathcal{F}_{\tau-}]=0 a.s. for an integrable martingale Y=\Delta M. Right? [GL: Correct]

    Applying a localization argument here then means that for a localizing sequence \theta_n of the now only locally integrable local martingale Y, we have:

    \mathbb{E}[Y_\tau^{\theta_n}|\mathcal{F}_{\tau-}]=0 a.s. for all \theta_n, right? [GL: Correct]

    But letting n\to \infty and noting Z_n=\mathbb{E}[Y_\tau^{\theta_n}|\mathcal{F}_{\tau-}], we can only conclude that :

    Z= \lim_{n\to \infty} Z_n= \lim_{n\to \infty} 0=0 a.s.

    This is not the statement to be proven, and I miss the step that would give the final conclusion.

    Such an argument would be:

    0=\lim_{n\to\infty} \mathbb{E}[Y_\tau^{\theta_n}|\mathcal{F}_{\tau-}]= \mathbb{E}[\lim_{n\to\infty} (Y_\tau^{\theta_n})|\mathcal{F}_{\tau-}]=\mathbb{E}[ Y_\tau |\mathcal{F}_{\tau-}] a.s.

    But I can’t properly justify the interchange of the limit and the expectation operator that gives the conclusion without adding extra assumptions.

    Maybe (or probably, should I say) I missed something that would trivially lead to the conclusion, so would you please point that out to me?

    Best regards

    1. Rather than taking the limit directly, use the fact that \{\theta_n \ge \tau\}\in\mathcal{F}_{\tau-} to write,

      \displaystyle 1_{\{\theta_n \ge \tau\}}\mathbb{E}[Y_\tau\vert\mathcal{F}_{\tau-}]=1_{\{\theta_n \ge \tau\}}\mathbb{E}[Y^{\theta_n}_\tau\vert\mathcal{F}_{\tau-}]=0

      If the indicator function at the front of this expression is moved inside the conditional expectation, then the identity is just using 1_{\{\theta_n \ge \tau\}}Y_\tau=Y^{\theta_n}_\tau.

      Now, you can take the limit as n goes to infinity, and it doesn’t have to be commuted with the expectation at all. You have 1_{\{\tau < \infty\}}\mathbb{E}[Y_\tau\vert\mathcal{F}_{\tau-}]=0. Also, as Y is taken to be 0 at infinity, this also holds if the indicator function is changed to 1_{\{\tau = \infty\}}. So, \mathbb{E}[Y_\tau\vert\mathcal{F}_{\tau-}]=0.

      There is something that I glossed over here, and I was intending to add to the end of this post as a note (I'll do this). I take the conditional expectation without proving that Y_\tau is integrable. In fact, it need not be integrable. For a random variable Z and sub-sigma-algebra \mathcal{G}, the conditional expectation \mathbb{E}[\vert Z\vert\;\vert\mathcal{G}] is always defined, although it could be infinite. If this is almost surely finite, then \mathbb{E}[Z\vert\mathcal{G}] is well-defined, by \mathbb{E}[1_A\mathbb{E}[Z\vert\mathcal{G}]]=\mathbb{E}[1_AZ] for any A\in \mathcal{G} such that 1_AZ is integrable. Also, \mathbb{E}[\vert Z\vert\;\vert\mathcal{G}] is almost surely finite if and only if there is a sequence A_n\in\mathcal{G} with \bigcup_nA_n=\Omega such that each 1_{A_n}Z is integrable. From this, you can show that \mathbb{E}[\vert X_\tau\vert\;\vert\mathcal{F}_{\tau-}] < \infty almost surely, for any locally integrable process X. So, \mathbb{E}[X_\tau\vert\mathcal{F}_{\tau-}] makes sense. I’ll append a note to this post along those lines.

      Also, I do plan on tidying up these posts and trying to incorporate comments or clarifications but, for now, at least I have your comments at the bottom of the post in case it causes anyone else any confusion. I think the jump in Lemma 2 here is too large for many people to see how this works, so I’m not surprised you picked up on it. Like I said, the original thinking was to add a note concerning conditional expectations at the end and link to it (but, I forgot).

      1. Hi George,

        Thanks for this very neat explanation and for the additional elements.

        So here your A_n=\{\theta_n\ge \tau\} which tends to \Omega almost surely by hypothesis.

        The worst part here is that I knew this generalisation of conditional expectation. By the way, it is named \sigma-integrability with respect to \mathcal{G} in He, Wang, and Yan’s book “Semimartingale Theory and Stochastic Calculus” (first chapter).

        Best regards

  3. Hi,

    Could you please give me a reference on this stuff?
    I need a reference book having results like Lemma 9. Any help will be greatly appreciated.

  4. Hi George!

    I’m currently studying Markov processes and the concept of compensators keeps cropping up. From the first paragraph, I got the gist of the idea of compensators. But I can’t get the rationale: why do we have to break stochastic processes into martingale and drift terms? What’s the purpose of splitting them up? I’m sorry if this may sound naive. Thank you.

  5. There are lots of reasons. The decomposition of stochastic processes into martingale and finite variation terms turns out to be useful in many situations, and I cannot do this idea justice in a short comment. From the purely theoretical viewpoint for semimartingales, it is useful for integration.
    1) The drift term has finite variation, so we can integrate with respect to it. There are also different techniques to integrate with respect to a martingale and these definitions are consistent (for a finite variation martingale, the integrals coincide). So, we can split it up and use the different constructions for the martingale and drift components.
    2) The decomposition helps tell you about the distribution. Suppose you want to compute the expectation of f(X_t) for a process X. Only the drift component of f(X) contributes (subject to technical constraints). If you want the expectation of f(X)^2, then the quadratic variation term also contributes, which comes from the martingale term.
    3) For SDEs representing physical processes, the drift and martingale components often represent distinct things, with the drift component given by the contribution from the non-stochastic physical laws of the system and the martingale component coming from some external random noise. Consider the Ornstein-Uhlenbeck process representing the noisy motion of particles, with the drift term coming from resistance/friction and the martingale term coming from the random shocks of molecules hitting the particles.

    Probably lots of other examples could be given, but the decomposition is useful very often for different reasons.
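    To make point 2 concrete, here is a minimal discrete-time sketch (my own toy example, with the path count and horizon chosen arbitrarily): for a simple random walk S, the process X_n = S_n^2 has compensator A_n = \sum_{k\le n}\mathbb{E}[X_k-X_{k-1}\vert\mathcal{F}_{k-1}] = n, since \mathbb{E}[2S_{k-1}\xi_k+1\vert\mathcal{F}_{k-1}]=1. So M = S^2 - A is a martingale, and the drift term alone gives the expectation \mathbb{E}[S_n^2]=n:

```python
import numpy as np

# Discrete-time toy example of the martingale/drift decomposition:
# for a simple random walk S_n = xi_1 + ... + xi_n with xi_k = +-1,
# X_n = S_n^2 has compensator A_n = n, so M_n = S_n^2 - n should be a
# martingale, and in particular E[M_n] = 0 for every n.

rng = np.random.default_rng(0)
paths, n = 100_000, 20
xi = rng.choice([-1, 1], size=(paths, n))
S = xi.cumsum(axis=1)
M = S**2 - np.arange(1, n + 1)        # candidate martingale part of X = S^2
max_bias = np.abs(M.mean(axis=0)).max()
print(f"max_k |E[M_k]| ~ {max_bias:.3f}  (should be near 0)")
```

    The Monte Carlo averages of M_k stay near zero for every k, while the averages of X_k = S_k^2 track the drift term A_k = k.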
