Local Time Continuity

Figure 1: Brownian motion and its local time surface

The local time of a semimartingale at a level x is a continuous increasing process, measuring the amount of time that the process spends at that level. As the definition involves stochastic integrals, it is only determined up to probability one. This can cause issues if we want to consider the local times at all levels simultaneously. Since x ranges over the real numbers, it takes uncountably many values and, as a union of uncountably many zero probability sets can have positive probability or even be non-measurable, this is not sufficient to determine the entire local time ‘surface’

\displaystyle  (t,x)\mapsto L^x_t(\omega)

for almost all {\omega\in\Omega}. This is the common issue of choosing good versions of processes. In this case, we already have a continuous version in the time index but, as yet, have not constructed a good version jointly in the time and level. This issue arose in the post on the Ito–Tanaka–Meyer formula, for which we needed to choose a version which is jointly measurable. Although that was sufficient there, joint measurability is still not enough to uniquely determine the full set of local times, up to probability one. The ideal situation is when a version exists which is jointly continuous in both time and level, in which case we should work with this choice. This is always possible for continuous local martingales.

Theorem 1 Let X be a continuous local martingale. Then, the local times

\displaystyle  (t,x)\mapsto L^x_t

have a modification which is jointly continuous in x and t. Furthermore, this modification is almost surely {\gamma}-Hölder continuous in x, for every {\gamma < 1/2}, uniformly over t in bounded intervals.
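As a rough numerical illustration of the surface in figure 1 (a sketch of my own, not part of the post), the local times of a Brownian motion B can be approximated by mollified occupation densities, {L^x_t\approx(2\epsilon)^{-1}{\rm Leb}\{s\le t\colon\lvert B_s-x\rvert<\epsilon\}}, and the Hölder continuity in the level can be eyeballed from the increments across neighbouring levels.

```python
import numpy as np

# Sketch: approximate the local time surface (t, x) -> L^x_t of a Brownian
# motion by a mollified occupation density. For Brownian motion d[B]_s = ds,
# so L^x_t is roughly the time spent in (x - eps, x + eps) divided by 2*eps.
rng = np.random.default_rng(0)
n, T = 20_000, 1.0
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

eps = 0.02
levels = np.linspace(-1.5, 1.5, 121)

# Rows are levels x, columns are times t; L[i, j] approximates L^{x_i}_{t_j}.
in_window = (np.abs(B[None, :] - levels[:, None]) < eps).astype(float)
L = np.cumsum(in_window, axis=1) * dt / (2 * eps)

# Crude look at the Hölder continuity in x: increments of L^x_T between
# neighbouring levels should be of order dx^gamma for any gamma < 1/2.
dx = levels[1] - levels[0]
print("max increment between adjacent levels:", np.abs(np.diff(L[:, -1])).max())
print("level spacing dx:", dx, " dx**0.45:", dx ** 0.45)
```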

Continue reading “Local Time Continuity”

The Ito-Tanaka-Meyer Formula

Ito’s lemma is one of the most important and useful results in the theory of stochastic calculus. This is a stochastic generalization of the chain rule, or change of variables formula, and differs from the classical deterministic formulas by the presence of a quadratic variation term. One drawback, which can limit the applicability of Ito’s lemma in some situations, is that it only applies to twice continuously differentiable functions. However, the quadratic variation term can alternatively be expressed using local times, which relaxes the differentiability requirement. This generalization of Ito’s lemma was derived by Tanaka and Meyer, and applies to one-dimensional semimartingales.

The local time of a stochastic process X at a fixed level x can be written, very informally, as an integral of a Dirac delta function with respect to the continuous part of the quadratic variation {[X]^{c}},

\displaystyle  L^x_t=\int_0^t\delta(X-x)d[X]^c. (1)

This was explained in an earlier post. As the Dirac delta is only a distribution, and not a true function, equation (1) is not really a well-defined mathematical expression. However, as we saw, with some manipulation a valid expression can be obtained which defines the local time whenever X is a semimartingale.

Going in a slightly different direction, we can try multiplying (1) by a bounded measurable function {f(x)} and integrating over x. Commuting the order of integration on the right hand side, and applying the defining property of the delta function, that {\int f(x)\delta(X-x)dx} is equal to {f(X)}, gives

\displaystyle  \int_{-\infty}^{\infty} L^x_t f(x)dx=\int_0^tf(X)d[X]^c. (2)

By eliminating the delta function, the right hand side has been transformed into a well-defined expression. In fact, it is now the left side of the identity that is a problem, since the local time was only defined up to probability one at each level x. Ignoring this issue for the moment, recall the version of Ito’s lemma for general non-continuous semimartingales,

\displaystyle  \begin{aligned} f(X_t)=& f(X_0)+\int_0^t f^{\prime}(X_-)dX+\frac12A_t\\ &\quad+\sum_{s\le t}\left(\Delta f(X_s)-f^\prime(X_{s-})\Delta X_s\right). \end{aligned} (3)

where {A_t=\int_0^t f^{\prime\prime}(X)d[X]^c}. Equation (2) allows us to express this quadratic variation term using local times,

\displaystyle  A_t=\int_{-\infty}^{\infty} L^x_t f^{\prime\prime}(x)dx.

The benefit of this form is that, even though it still uses the second derivative of {f}, it is only really necessary for this to exist in a weaker, measure-theoretic, sense. Suppose that {f} is convex, or a linear combination of convex functions. Then, its right-hand derivative {f^\prime(x+)} exists, and is itself of locally finite variation. Hence, the Stieltjes integral {\int L^x\,df^\prime(x+)} exists. The infinitesimal {df^\prime(x+)} is alternatively written {f^{\prime\prime}(dx)} and, in the twice continuously differentiable case, equals {f^{\prime\prime}(x)dx}. Then,

\displaystyle  A_t=\int _{-\infty}^{\infty} L^x_t f^{\prime\prime}(dx). (4)
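As a quick illustration of (4) (an example of mine, not part of the post): take {f(x)=\lvert x-a\rvert} for a fixed real a. The right-hand derivative is {f^\prime(x+)=1_{\{x\ge a\}}-1_{\{x<a\}}}, so {f^{\prime\prime}(dx)=2\delta_a(dx)} and (4) gives {A_t=2L^a_t}. Substituting into (3),

\displaystyle  \lvert X_t-a\rvert=\lvert X_0-a\rvert+\int_0^t{\rm sgn}(X_{s-}-a)\,dX_s+L^a_t+\sum_{s\le t}\left(\Delta\lvert X_s-a\rvert-{\rm sgn}(X_{s-}-a)\Delta X_s\right),

where {{\rm sgn}} denotes the right-continuous sign function {1_{\{x\ge0\}}-1_{\{x<0\}}}. For continuous X the final sum vanishes and this reduces to Tanaka’s formula.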

Using this expression in (3) gives the Ito-Tanaka-Meyer formula. Continue reading “The Ito-Tanaka-Meyer Formula”

The Stochastic Fubini Theorem

Fubini’s theorem states that, subject to precise conditions, it is possible to switch the order of integration when computing double integrals. In the theory of stochastic calculus, we also encounter double integrals and would like to be able to commute their order. However, since these can involve stochastic integration rather than the usual deterministic case, the classical results are not always applicable. To help with such cases, we could do with a new stochastic version of Fubini’s theorem. Here, I will consider the situation where one integral is of the standard kind with respect to a finite measure, and the other is stochastic. To start, recall the classical Fubini theorem.

Theorem 1 (Fubini) Let {(E,\mathcal E,\mu)} and {(F,\mathcal F,\nu)} be finite measure spaces, and {f\colon E\times F\rightarrow{\mathbb R}} be a bounded {\mathcal E\otimes\mathcal F}-measurable function. Then,

\displaystyle  y\mapsto\int f(x,y)d\mu(x)

is {\mathcal F}-measurable,

\displaystyle  x\mapsto\int f(x,y)d\nu(y)

is {\mathcal E}-measurable, and,

\displaystyle  \int\int f(x,y)d\mu(x)d\nu(y)=\int\int f(x,y)d\nu(y)d\mu(x). (1)
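To see the shape of the statement we are after in the stochastic case, here is a small numerical sketch (mine, with a made-up integrand) comparing the two orders of integration for a discretized Brownian path. After discretization both sides collapse to the same finite double sum, which is exactly the identity that the stochastic Fubini theorem extends to the limit.

```python
import numpy as np

# Rough illustration of the identity asserted by a stochastic Fubini theorem:
# for a finite measure mu on a parameter space E and a bounded, jointly
# measurable integrand f(x, s),
#   int_E ( int_0^T f(x,s) dW_s ) mu(dx) = int_0^T ( int_E f(x,s) mu(dx) ) dW_s.
# With W discretized, both sides reduce to the same finite double sum.
rng = np.random.default_rng(1)
T, n = 1.0, 10_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
s = np.arange(n) * dt                   # left endpoints of the time grid

# Take E = [0,1] with mu a discretized uniform measure, and the (hypothetical)
# integrand f(x, s) = exp(-x * s).
x = np.linspace(0.0, 1.0, 101)
dmu = np.full_like(x, 1.0 / len(x))
f = np.exp(-np.outer(x, s))             # shape (len(x), n)

lhs = np.sum((f @ dW) * dmu)            # integrate against dW first, then mu
rhs = np.sum((dmu @ f) * dW)            # integrate against mu first, then dW
print(lhs, rhs)                         # agree up to floating-point error
```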

Continue reading “The Stochastic Fubini Theorem”

Pathwise Martingale Inequalities

Recall Doob’s inequalities, covered earlier in these notes, which bound expectations of functions of the maximum of a martingale in terms of its terminal distribution. Although these are often applied to martingales, they hold true more generally for cadlag submartingales. Here, I use {\bar X_t\equiv\sup_{s\le t}X_s} to denote the running maximum of a process.

Theorem 1 Let X be a nonnegative cadlag submartingale. Then,

  • {{\mathbb P}\left(\bar X_t \ge K\right)\le K^{-1}{\mathbb E}[X_t]} for all {K > 0}.
  • {\lVert\bar X_t\rVert_p\le (p/(p-1))\lVert X_t\rVert_p} for all {p > 1}.
  • {{\mathbb E}[\bar X_t]\le(e/(e-1)){\mathbb E}[X_t\log X_t+1]}.

In particular, if X is a cadlag martingale then {\lvert X\rvert} is a submartingale, so theorem 1 applies with {\lvert X\rvert} in place of X.
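As a quick Monte Carlo sanity check of the second inequality (my sketch, not from the post), take {X=\lvert B\rvert} for a Brownian motion B and {p=2}, so the claim is {{\mathbb E}[\bar X_t^2]\le4{\mathbb E}[X_t^2]=4t}.

```python
import numpy as np

# Monte Carlo check of Doob's L^2 inequality for the nonnegative submartingale
# X = |B|, with B a Brownian motion:  E[(sup_{s<=t}|B_s|)^2] <= 4*E[|B_t|^2] = 4t.
rng = np.random.default_rng(2)
paths, n, t = 20_000, 1_000, 1.0
dW = rng.normal(0.0, np.sqrt(t / n), (paths, n))
B = np.cumsum(dW, axis=1)
X = np.abs(B)

lhs = np.mean(X.max(axis=1) ** 2)
rhs = 4 * np.mean(X[:, -1] ** 2)
print(lhs, rhs)    # lhs comes out well below rhs = 4t
```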

We also saw the following much stronger (sub)martingale inequality in the post on the maximum maximum of martingales with known terminal distribution.

Theorem 2 Let X be a cadlag submartingale. Then, for any real K and nonnegative real t,

\displaystyle  {\mathbb P}(\bar X_t\ge K)\le\inf_{x < K}\frac{{\mathbb E}[(X_t-x)_+]}{K-x}. (1)

This is particularly sharp, in the sense that for any distribution for {X_t}, there exists a martingale with this terminal distribution for which (1) becomes an equality simultaneously for all values of K. Furthermore, all of the inequalities stated in theorem 1 follow from (1). For example, the first one is obtained by taking {x=0} in (1). The remaining two can also be proved from (1) by integrating over K.
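To sketch how the {L^p} case follows (filling in the integration step, which is not spelled out above): fix {p>1}, take {x=(1-1/p)K} in (1) to get {{\mathbb P}(\bar X_t\ge K)\le pK^{-1}{\mathbb E}[(X_t-(1-1/p)K)_+]}, and write {{\mathbb E}[\bar X_t^p]=\int_0^\infty pK^{p-1}{\mathbb P}(\bar X_t\ge K)\,dK}. Commuting the expectation with the integral over K and computing the resulting elementary integral,

\displaystyle  {\mathbb E}[\bar X_t^p]\le{\mathbb E}\left[\int_0^\infty p^2K^{p-2}\left(X_t-(1-1/p)K\right)_+dK\right]=\left(\frac{p}{p-1}\right)^p{\mathbb E}[X_t^p],

which is the second inequality of theorem 1.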

Note that all of the submartingale inequalities above are of the form

\displaystyle  {\mathbb E}[F(\bar X_t)]\le{\mathbb E}[G(X_t)] (2)

for certain choices of functions {F,G\colon{\mathbb R}\rightarrow{\mathbb R}^+}. The aim of this post is to show how they have a more general ‘pathwise’ form,

\displaystyle  F(\bar X_t)\le G(X_t) - \int_0^t\xi\,dX (3)

for some nonnegative predictable process {\xi}. It is relatively straightforward to show that (2) follows from (3) by noting that the integral is a submartingale and, hence, has nonnegative expectation. To be rigorous, there are some integrability considerations to deal with, so a proof will be included later in this post.

Inequality (3) is required to hold almost everywhere, and not just in expectation, so is a considerably stronger statement than the standard martingale inequalities. Furthermore, it is not necessary for X to be a submartingale for (3) to make sense, as the stochastic integral is defined for any semimartingale X. We can go further, and even drop the requirement that X is a semimartingale. As we will see, in the examples covered in this post, {\xi_t} will be of the form {h(\bar X_{t-})} for an increasing right-continuous function {h\colon{\mathbb R}\rightarrow{\mathbb R}}, so integration by parts can be used,

\displaystyle  \int h(\bar X_-)\,dX = h(\bar X)X-h(\bar X_0)X_0 - \int X\,dh(\bar X). (4)

The right hand side of (4) is well-defined for any cadlag real-valued process, by using the pathwise Lebesgue–Stieltjes integral with respect to the increasing process {h(\bar X)}, so can be used as the definition of {\int h(\bar X_-)dX}. In the case where X is a semimartingale, integration by parts ensures that this agrees with the stochastic integral {\int\xi\,dX}. Since we now have a pathwise interpretation of (3) for all cadlag processes X, it is no longer necessary to suppose that X is a submartingale, a semimartingale, or even to require the existence of an underlying probability space. All that is necessary is for {t\mapsto X_t} to be a cadlag real-valued function. Hence, the martingale inequalities reduce to straightforward results of real analysis requiring no probability theory and, consequently, they are much more general. I state the precise pathwise generalizations of Doob’s inequalities now, leaving the proof until later in the post. As the first inequality of theorem 1 is just the special case of (1) with {x=0}, we do not need to include it explicitly here.

Theorem 3 Let X be a cadlag process and t be a nonnegative time.

  1. For real {K > x},
    \displaystyle  1_{\{\bar X_t\ge K\}}\le\frac{(X_t-x)_+}{K-x}-\int_0^t\xi\,dX (5)

    where {\xi=(K-x)^{-1}1_{\{\bar X_-\ge K\}}}.

  2. If X is nonnegative and p,q are positive reals with {p^{-1}+q^{-1}=1} then,
    \displaystyle  \bar X_t^p\le q^p X^p_t-\int_0^t\xi dX (6)

    where {\xi=pq\bar X_-^{p-1}}.

  3. If X is nonnegative then,
    \displaystyle  \bar X_t\le\frac{e}{e-1}\left( X_t \log X_t +1\right)-\int_0^t\xi\,dX (7)

    where {\xi=\frac{e}{e-1}\log(\bar X_-\vee1)}.
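To make the pathwise nature of these bounds concrete, here is a small numerical sketch (mine, not from the post) checking inequality (6) along a single simulated nonnegative path, viewed as a piecewise constant cadlag function. For such a path the integral {\int_0^t\xi\,dX} with {\xi=pq\bar X_-^{p-1}} is just a finite sum, agreeing with the Lebesgue–Stieltjes expression in (4).

```python
import numpy as np

# Pathwise check of inequality (6): Xbar_t^p <= q^p X_t^p - int_0^t xi dX,
# with xi = p*q*Xbar_-^(p-1), for a nonnegative piecewise constant path.
rng = np.random.default_rng(3)
n, p = 1_000, 2.0
q = p / (p - 1)
X = np.exp(np.cumsum(rng.normal(-0.001, 0.02, n)))   # a nonnegative sample path
Xbar = np.maximum.accumulate(X)                      # running maximum

xi = p * q * Xbar[:-1] ** (p - 1)                    # xi uses the left limit of Xbar
integral = np.sum(xi * np.diff(X))                   # pathwise integral of xi dX

lhs = Xbar[-1] ** p
rhs = q ** p * X[-1] ** p - integral
print(lhs <= rhs + 1e-12, lhs, rhs)                  # inequality (6) holds pathwise
```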

Continue reading “Pathwise Martingale Inequalities”

Semimartingale Local Times

Figure 1: Brownian motion B with local time L and auxiliary Brownian motion W

For a stochastic process X taking values in a state space E, its local time at a point {x\in E} is a measure of the time spent at x. For a continuous time stochastic process, we could try and simply compute the Lebesgue measure of the time at the level,

\displaystyle  L^x_t=\int_0^t1_{\{X_s=x\}}ds. (1)

For processes which hit the level {x} and stick there for some time, this makes some sense. However, if X is a standard Brownian motion, it will almost surely give zero, so is not helpful. Even though X hits every real value infinitely often, continuity of the normal distribution gives {{\mathbb P}(X_s=x)=0} at each positive time, so that {L^x_t} defined by (1) has zero expectation and is, hence, almost surely zero.

Rather than the indicator function of {\{X=x\}} as in (1), an alternative is to use the Dirac delta function,

\displaystyle  L^x_t=\int_0^t\delta(X_s-x)\,ds. (2)

Unfortunately, the Dirac delta is not a true function but a distribution, so (2) is not a well-defined expression. However, if it can be made rigorous, then it does seem to have some of the properties we would want. For example, the expectation {{\mathbb E}[\delta(X_s-x)]} can be interpreted as the probability density of {X_s} evaluated at {x}, which has a positive and finite value, so it should lead to positive and finite local times. Equation (2) still relies on the Lebesgue measure over the time index, so will not behave as we may expect under time changes, and will not make sense for processes without a continuous probability density. A better approach is to integrate with respect to the quadratic variation,

\displaystyle  L^x_t=\int_0^t\delta(X_s-x)d[X]_s (3)

which, for Brownian motion, amounts to the same thing. Although (3) is still not a well-defined expression, since it still involves the Dirac delta, the idea is to come up with a definition which amounts to the same thing in spirit. Important properties that it should satisfy are that it is an adapted, continuous and increasing process with increments supported on the set {\{X=x\}},

\displaystyle  L^x_t=\int_0^t1_{\{X_s=x\}}dL^x_s.
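As a quick numerical illustration of why (1) fails for Brownian motion while a smoothed version of (3) behaves sensibly (my sketch, not from the post): replacing the delta function by {(2\epsilon)^{-1}1_{(-\epsilon,\epsilon)}} and using {d[X]_s=ds} gives positive values approximating the local time at zero, whereas the naive occupation time (1) is essentially zero.

```python
import numpy as np

# For a simulated Brownian motion, the naive occupation time (1) at a level is
# (essentially) zero, while mollifying the delta in (3) -- with d[X]_s = ds --
# gives positive approximations to the local time at that level.
rng = np.random.default_rng(4)
n, T, x = 100_000, 1.0, 0.0
dt = T / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

naive = np.sum(B == x) * dt                 # only the starting point hits x exactly
for eps in (0.2, 0.05, 0.02):
    mollified = np.sum(np.abs(B - x) < eps) * dt / (2 * eps)
    print("eps =", eps, " mollified local time:", mollified)
print("naive occupation time (1):", naive)  # essentially zero
```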

Local times are a very useful and interesting part of stochastic calculus, and find important applications in excursion theory, stochastic integration and stochastic differential equations. However, I have not yet covered this subject in my notes, so do this now. Recalling Ito’s lemma for a function {f(X)} of a semimartingale X, this involves a term of the form {\int f^{\prime\prime}(X)d[X]} and, hence, requires {f} to be twice differentiable. If we try to apply the Ito formula to functions which are not twice differentiable, then {f^{\prime\prime}} can be understood in terms of distributions, and delta functions can appear, which brings local times into the picture. In the opposite direction, which I take in this post, we can try to generalise Ito’s formula and invert this to give a meaning to (3). Continue reading “Semimartingale Local Times”

Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample

Figure 1: The function f(t,x), convex in x and decreasing in t

Here, I attempt to construct a counterexample to the hypotheses of the earlier post, Do convex and decreasing functions preserve the semimartingale property? There, it was asked, for any semimartingale X and function {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} such that {f(t,x)} is convex in x and right-continuous and decreasing in t, is {f(t,X_t)} necessarily a semimartingale? It was explained how this is equivalent to the hypothesis: for any function {f\colon[0,1]^2\rightarrow{\mathbb R}} such that {f(t,x)} is convex and Lipschitz continuous in x and decreasing in t, does it decompose as {f=g-h} where {g(t,x)} and {h(t,x)} are convex in x and increasing in t. This is the form of the hypothesis which this post will be concerned with, so the example will only involve simple real analysis and no stochastic calculus. I will give some numerical calculations suggesting that the construction below is a counterexample, but do not have any proof of this. So, the hypothesis is still open.

Although the construction given here will be self-contained, it is worth noting that it is connected to the example of a martingale which moves along a deterministic path. If {\{M_t\}_{t\in[0,1]}} is the martingale constructed there, then

\displaystyle  C(t,x)={\mathbb E}[(M_t-x)_+]

defines a function from {[0,1]\times[-1,1]} to {{\mathbb R}} which is convex in x, as an expectation of convex functions of x, and increasing in t, by conditional Jensen’s inequality applied to the martingale M. The question is then whether C can be expressed as the difference of functions which are convex in x and decreasing in t. The example constructed in this post will be the same as C with the time direction reversed, and with a linear function of x added so that it is zero at {x=\pm1}. Continue reading “Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample”

Do Convex and Decreasing Functions Preserve the Semimartingale Property?

Some years ago, I spent considerable effort trying to prove the hypothesis below. After failing at this, I spent time trying to find a counterexample, but also with no success. I did post this as a question on mathoverflow, but it has so far received no conclusive answers. So, as far as I am aware, the following statement remains unproven either way.

Hypothesis H1 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and right-continuous and decreasing in t. Then, for any semimartingale X, {f(t,X_t)} is a semimartingale.

It is well known that convex functions of semimartingales are themselves semimartingales. See, for example, the Ito-Tanaka formula. More generally, if {f(t,x)} is increasing in t rather than decreasing, then it can be shown without much difficulty that {f(t,X_t)} is a semimartingale. Consider decomposing {f(t,X_t)} as

\displaystyle  f(t,X_t)=\int_0^tf_x(s,X_{s-})\,dX_s+V_t, (1)

for some process V. By convexity, the right-hand derivative of {f(t,x)} with respect to x always exists, and I denote it by {f_x}. In the case where f is twice continuously differentiable, the process V is given by Ito’s formula which, in particular, shows that it is a finite variation process. If {f(t,x)} is convex in x and increasing in t, then the terms in Ito’s formula for V are all increasing and, so, it is an increasing process. By taking limits of smooth functions, it follows that V is increasing even when the differentiability constraints are dropped, so {f(t,X_t)} is a semimartingale. Now, returning to the case where {f(t,x)} is decreasing in t, Ito’s formula is only able to say that V is of finite variation, and it is generally not monotonic. As limits of finite variation processes need not be of finite variation themselves, this says nothing about the case when f is not assumed to be differentiable, and does not help us to determine whether or not {f(t,X_t)} is a semimartingale.
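To spell out the smooth case described here (restricting, for this sketch, to continuous X and to f which is twice continuously differentiable in x and once in t), Ito’s formula applied to {f(t,X_t)} identifies the process V in (1) as

\displaystyle  V_t=f(0,X_0)+\int_0^tf_t(s,X_s)\,ds+\frac12\int_0^tf_{xx}(s,X_s)\,d[X]_s.

If {f(t,x)} is convex in x and increasing in t then {f_{xx}\ge0} and {f_t\ge0}, so V is increasing. If, instead, f is decreasing in t, the two integrals pull in opposite directions and V is only guaranteed to be of finite variation.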

Hypothesis H1 can be weakened by restricting to continuous functions of continuous martingales.

Hypothesis H2 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and continuous and decreasing in t. Then, for any continuous martingale X, {f(t,X_t)} is a semimartingale.

As continuous martingales are special cases of semimartingales, hypothesis H1 implies H2. In fact, the reverse implication also holds so that hypotheses H1 and H2 are equivalent.

Hypotheses H1 and H2 can also be recast as a simple real analysis statement which makes no reference to stochastic processes.

Hypothesis H3 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and decreasing in t. Then, {f=g-h} where {g(t,x)} and {h(t,x)} are convex in x and increasing in t.

Continue reading “Do Convex and Decreasing Functions Preserve the Semimartingale Property?”

Purely Discontinuous Semimartingales

As stated by the Bichteler-Dellacherie theorem, all semimartingales can be decomposed as the sum of a local martingale and an FV process. However, as the terms are only determined up to the addition of an FV local martingale, this decomposition is not unique. In the case of continuous semimartingales, we do obtain uniqueness, by requiring the terms in the decomposition to also be continuous. Furthermore, the decomposition into continuous terms is preserved by stochastic integration. Looking at non-continuous processes, there does exist a unique decomposition into local martingale and predictable FV processes, so long as we impose the slight restriction that the semimartingale is locally integrable.

In this post, I look at another decomposition which holds for all semimartingales and, moreover, is uniquely determined. This is the decomposition into continuous local martingale and purely discontinuous terms which, as we will see, is preserved by the stochastic integral. This is distinct from each of the decompositions mentioned above, except for the case of continuous semimartingales, for which it coincides with the sum of the continuous local martingale and FV components. Before proving the decomposition, I will start by describing the class of purely discontinuous semimartingales which, although they need not have finite variation, do have many of the properties of FV processes. In fact, they comprise precisely the closure of the set of FV processes under the semimartingale topology. The terminology can be a bit confusing, and it should be noted that purely discontinuous processes need not actually have any discontinuities. For example, all continuous FV processes are purely discontinuous. For this reason, the term ‘quadratic pure jump semimartingale’ is sometimes used instead, referring to the fact that their quadratic variation is a pure jump process. Recall that quadratic variations and covariations can be written as the sum of continuous and pure jump parts,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle [X]_t&\displaystyle=[X]^c_t+\sum_{s\le t}(\Delta X_s)^2,\smallskip\\ \displaystyle [X,Y]_t&\displaystyle=[X,Y]^c_t+\sum_{s\le t}\Delta X_s\Delta Y_s. \end{array} (1)

The statement that the quadratic variation is a pure jump process is equivalent to saying that its continuous part, {[X]^c}, is zero. As the only difference between the generalized Ito formula for semimartingales and for FV processes is in the terms involving continuous parts of the quadratic variations and covariations, purely discontinuous semimartingales behave much like FV processes under changes of variables and integration by parts. Yet another characterisation of purely discontinuous semimartingales is as sums of purely discontinuous local martingales — which were studied in the previous post — and of FV processes.

Rather than starting by choosing one specific property to use as the definition, I prove the equivalence of various statements, any of which can be taken to define the purely discontinuous semimartingales.

Theorem 1 For a semimartingale X, the following are equivalent.

  1. {[X]^c=0}.
  2. {[X,Y]^c=0} for all semimartingales Y.
  3. {[X,Y]=0} for all continuous semimartingales Y.
  4. {[X,M]=0} for all continuous local martingales M.
  5. {X=M+V} for a purely discontinuous local martingale M and FV process V.
  6. there exists a sequence {\{X^n\}_{n=1,2,\ldots}} of FV processes such that {X^n\rightarrow X} in the semimartingale topology.
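As a simulation sketch of the decomposition (1) and of the first property (my example, not from the post): take {X=B+N} for a Brownian motion B and an independent Poisson process N. Then {[X]_t=t+N_t}, the squared jumps contribute {N_t}, and removing them leaves the continuous part {[X]^c_t=t} coming from B alone; the FV process N by itself has {[N]^c=0} and so is purely discontinuous.

```python
import numpy as np

# Realized quadratic variation of X = B + N along a fine grid approximates
# [X]_T = T + N_T; subtracting the squared jumps leaves the continuous part
# [X]^c_T = T contributed by the Brownian component.
rng = np.random.default_rng(5)
n, T, lam = 200_000, 1.0, 5.0
dt = T / n
t = np.arange(n + 1) * dt
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
jump_times = np.sort(rng.uniform(0.0, T, rng.poisson(lam * T)))
N = np.searchsorted(jump_times, t, side="right").astype(float)
X = B + N

realized_qv = np.sum(np.diff(X) ** 2)           # approximates [X]_T = T + N_T
sum_sq_jumps = len(jump_times)                  # each jump of X has size 1
print(realized_qv, realized_qv - sum_sq_jumps)  # second value is close to T = 1
```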

Continue reading “Purely Discontinuous Semimartingales”

Properties of Quasimartingales

The previous two posts introduced the concept of quasimartingales, and noted that they can be considered as a generalization of submartingales and supermartingales. In this post we prove various basic properties of quasimartingales and of the mean variation, extending results of martingale theory to this situation.

We start with a version of optional stopping which applies for quasimartingales. For now, we just consider simple stopping times, which are stopping times taking values in a finite subset of the nonnegative extended reals {\bar{\mathbb R}_+=[0,\infty]}. Stopping a process can only decrease its mean variation (recall the alternative definitions {{\rm Var}} and {{\rm Var}^*} for the mean variation). For example, a process X is a martingale if and only if {{\rm Var}(X)=0}, so in this case the following result says that stopped martingales are martingales.

Lemma 1 Let X be an adapted process and {\tau} be a simple stopping time. Then

\displaystyle  {\rm Var}^*(X^\tau)\le{\rm Var}^*(X). (1)

Assuming, furthermore, that X is integrable,

\displaystyle  {\rm Var}(X^\tau)\le{\rm Var}(X). (2)

and, more precisely,

\displaystyle  {\rm Var}(X)={\rm Var}(X^\tau)+{\rm Var}(X-X^\tau). (3)

Continue reading “Properties of Quasimartingales”

Compensators

A very common technique when looking at general stochastic processes is to break them down into separate martingale and drift terms. This is easiest to describe in the discrete-time situation. So, suppose that {\{X_n\}_{n=0,1,\ldots}} is a stochastic process adapted to the discrete-time filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_n\}_{n=0,1,\ldots},{\mathbb P})}. If X is integrable, then it is possible to decompose it as the sum of a martingale M and a process A, starting from zero, such that {A_n} is {\mathcal{F}_{n-1}}-measurable for each {n\ge1}. That is, A is a predictable process. The martingale condition on M enforces the identity

\displaystyle  A_n-A_{n-1}={\mathbb E}[A_n-A_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}].

So, A is uniquely defined by

\displaystyle  A_n=\sum_{k=1}^n{\mathbb E}\left[X_k-X_{k-1}\vert\mathcal{F}_{k-1}\right], (1)

and is referred to as the compensator of X. This is just the predictable term in the Doob decomposition described at the start of the previous post.
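For a concrete discrete-time illustration of (1) (my example, not from the post): if {S_n} is a simple random walk with i.i.d. {\pm1} steps and {X_n=S_n^2}, then {{\mathbb E}[X_n-X_{n-1}\vert\mathcal{F}_{n-1}]={\mathbb E}[2S_{n-1}\epsilon_n+1\vert\mathcal{F}_{n-1}]=1}, so (1) gives the compensator {A_n=n} and {X_n-A_n=S_n^2-n} is a martingale. The sketch below checks numerically that its increments have mean zero.

```python
import numpy as np

# Compensator of X_n = S_n^2 for a simple random walk S: formula (1) gives
# A_n = n, so M = X - A = S^2 - n should be a martingale. Check that the
# increments of M have mean zero across Monte Carlo paths.
rng = np.random.default_rng(6)
paths, n = 100_000, 50
eps = rng.choice([-1.0, 1.0], size=(paths, n))
S = np.cumsum(eps, axis=1)
X = S ** 2
A = np.arange(1, n + 1, dtype=float)            # compensator from formula (1)
M = X - A                                       # candidate martingale

increments = np.diff(M, axis=1)
print(np.abs(increments.mean(axis=0)).max())    # close to zero, up to MC error
```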

In continuous time, where we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}, the situation is much more complicated. There is no simple explicit formula such as (1) for the compensator of a process. Instead, it is defined as follows.

Definition 1 The compensator of a cadlag adapted process X is a predictable FV process A, with {A_0=0}, such that {X-A} is a local martingale.

For an arbitrary process, there is no guarantee that a compensator exists. From the previous post, however, we know exactly when it does. The processes for which a compensator exists are precisely the special semimartingales or, equivalently, the locally integrable semimartingales. Furthermore, if it exists, then the compensator is uniquely defined up to evanescence. Definition 1 is considerably different from equation (1) describing the discrete-time case. However, we will show that, at least for processes with integrable variation, the continuous-time definition does follow from the limit of discrete time compensators calculated along ever finer partitions (see below).

Although we know that compensators exist for all locally integrable semimartingales, the notion is often defined and used specifically for the case of adapted processes with locally integrable variation or, even, just integrable increasing processes. As with all FV processes, these are semimartingales, with stochastic integration for locally bounded integrands coinciding with Lebesgue-Stieltjes integration along the sample paths. As an example, consider a homogeneous Poisson process X with rate {\lambda}. The compensated Poisson process {M_t=X_t-\lambda t} is a martingale. So, X has compensator {\lambda t}.

We start by describing the jumps of the compensator, which can be done simply in terms of the jumps of the original process. Recall that the set of jump times {\{t\colon\Delta X_t\not=0\}} of a cadlag process is contained in the graphs of a sequence of stopping times, each of which is either predictable or totally inaccessible. We therefore only need to calculate {\Delta A_\tau} separately in the case where {\tau} is a predictable stopping time and in the case where it is totally inaccessible.

For the remainder of this post, it is assumed that the underlying filtered probability space is complete. Whenever we refer to the compensator of a process X, it will be understood that X is a special semimartingale. Also, the jump {\Delta X_t} of a process is defined to be zero at time {t=\infty}.

Lemma 2 Let A be the compensator of a process X. Then, for a stopping time {\tau},

  1. {\Delta A_\tau=0} if {\tau} is totally inaccessible.
  2. {\Delta A_\tau={\mathbb E}\left[\Delta X_\tau\vert\mathcal{F}_{\tau-}\right]} if {\tau} is predictable.

Continue reading “Compensators”