Pathwise Burkholder-Davis-Gundy Inequalities

As covered earlier in my notes, the Burkholder-Davis-Gundy inequality relates the moments of the maximum of a local martingale M to those of its quadratic variation,

\displaystyle  c_p^{-1}{\mathbb E}[[M]^{p/2}_\tau]\le{\mathbb E}[\bar M_\tau^p]\le C_p{\mathbb E}[[M]^{p/2}_\tau]. (1)

Here, {\bar M_t\equiv\sup_{s\le t}\lvert M_s\rvert} is the running maximum, {[M]} is the quadratic variation, {\tau} is a stopping time, and the exponent {p} is a real number greater than or equal to 1. The constants {c_p} and {C_p} are positive, depend only on p, and are independent of the choice of local martingale and stopping time. Furthermore, for continuous local martingales, which are the focus of this post, the inequality holds for all {p > 0}.
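
To get a quick feel for (1), here is a rough Monte Carlo illustration (not part of the original argument), taking M to be a standard Brownian motion so that {[M]_t=t} is deterministic; all names and parameter choices are my own.

```python
import numpy as np

# Monte Carlo sketch of (1) with M a standard Brownian motion, for which
# [M]_t = t is deterministic. All names and parameters are illustrative.
rng = np.random.default_rng(0)
n_paths, n_steps, T, p = 10_000, 500, 1.0, 2.0

dW = rng.normal(0.0, np.sqrt(T / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)

max_moment = np.mean(np.max(np.abs(W), axis=1) ** p)  # E[ Mbar_T^p ]
qv_moment = T ** (p / 2)                              # E[ [M]_T^{p/2} ] = T^{p/2}

# The ratio should lie between c_p^{-1} and C_p; for p = 2, Doob gives C_2 <= 4.
print(max_moment / qv_moment)
```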

Since the quadratic variation used in my notes, by definition, starts at zero, the BDG inequality also required the local martingale to start at zero. This is not an important restriction, but it can be removed by requiring the quadratic variation to start at {[M]_0=M_0^2}. Henceforth, I will assume that this is the case, which means that if we are working with the definition in my notes then we should add {M_0^2} everywhere to the quadratic variation {[M]}.

In keeping with the theme of the previous post on Doob’s inequalities, such martingale inequalities should have pathwise versions of the form

\displaystyle  c_p^{-1}[M]^{p/2}+\int\alpha dM\le\bar M^p\le C_p[M]^{p/2}+\int\beta dM (2)

for predictable processes {\alpha,\beta}. Inequalities in this form are considerably stronger than (1), since they apply on all sample paths, not just on average. Also, we do not require M to be a local martingale — it is sufficient for it to be a (continuous) semimartingale. However, in the case where M is a local martingale, the pathwise version (2) does imply the BDG inequality (1), using the fact that stochastic integration preserves the local martingale property.

Lemma 1 Let X and Y be nonnegative increasing measurable processes satisfying {X\le Y-N} for a local (sub)martingale N starting from zero. Then, {{\mathbb E}[X_\tau]\le{\mathbb E}[Y_\tau]} for all stopping times {\tau}.

Proof: Let {\tau_n} be a sequence of bounded stopping times increasing to infinity such that the stopped processes {N^{\tau_n}} are submartingales, so that optional sampling gives {{\mathbb E}[N_{\tau_n\wedge\tau}]\ge N_0=0}. Then,

\displaystyle  {\mathbb E}[1_{\{\tau_n\ge\tau\}}X_\tau]\le{\mathbb E}[X_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_{\tau_n\wedge\tau}]-{\mathbb E}[N_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_\tau].

Letting n increase to infinity and using monotone convergence on the left hand side gives the result. ⬜

Moving on to the main statements of this post, I will mention that there are actually many different pathwise versions of the BDG inequalities. I opt for the especially simple statements given in Theorem 2 below. See the papers Pathwise Versions of the Burkholder-Davis-Gundy Inequality by Beiglböck and Siorpaes, and Applications of Pathwise Burkholder-Davis-Gundy Inequalities by Siorpaes, for slightly different approaches, although these papers do also effectively contain proofs of (3) and (4) for the special case of {r=1/2}. As usual, I am using {x\vee y} to represent the maximum of two numbers.

Theorem 2 Let X and Y be nonnegative continuous processes with {X_0=Y_0}. For any {0 < r\le1} we have,

\displaystyle  (1-r)\bar X^r\le (3-2r)\bar Y^r+r\int(\bar X\vee\bar Y)^{r-1}d(X-Y) (3)

and, if X is increasing, this can be improved to,

\displaystyle  \bar X^r\le (2-r)\bar Y^r+r\int(\bar X\vee\bar Y)^{r-1}d(X-Y). (4)

If {r\ge1} and X is increasing then,

\displaystyle  \bar X^r\le r^{r\vee 2}\,\bar Y^r+r^2\int(\bar X\vee\bar Y)^{r-1}d(X-Y). (5)
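
As a rough single-path check of (4), the following snippet (my own illustration, not from the papers cited above) discretises the integral with left-endpoint sums, taking {r=1/2}, the increasing process {X_t=1+t}, and {Y_t=(1+W_t)^2} for a simulated Brownian path W, so that {X_0=Y_0=1}.

```python
import numpy as np

# Single-path check of (4) with r = 1/2, discretised with left-endpoint
# sums. X_t = 1 + t (increasing) and Y_t = (1 + W_t)^2 are illustrative
# nonnegative continuous processes with X_0 = Y_0 = 1.
rng = np.random.default_rng(1)
n, T, r = 100_000, 1.0, 0.5

t = np.linspace(0.0, T, n + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))])
X, Y = 1.0 + t, (1.0 + W) ** 2

Xbar = np.maximum.accumulate(X)
Ybar = np.maximum.accumulate(Y)
integrand = np.maximum(Xbar, Ybar) ** (r - 1.0)
integral = np.sum(integrand[:-1] * np.diff(X - Y))  # predictable (left) endpoints

lhs = Xbar[-1] ** r
rhs = (2.0 - r) * Ybar[-1] ** r + r * integral
print(lhs, "<=", rhs, lhs <= rhs)
```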

Continue reading “Pathwise Burkholder-Davis-Gundy Inequalities”

Pathwise Martingale Inequalities

Recall Doob’s inequalities, covered earlier in these notes, which bound expectations of functions of the maximum of a martingale in terms of its terminal distribution. Although these are often applied to martingales, they hold true more generally for cadlag submartingales. Here, I use {\bar X_t\equiv\sup_{s\le t}X_s} to denote the running maximum of a process.

Theorem 1 Let X be a nonnegative cadlag submartingale. Then,

  • {{\mathbb P}\left(\bar X_t \ge K\right)\le K^{-1}{\mathbb E}[X_t]} for all {K > 0}.
  • {\lVert\bar X_t\rVert_p\le (p/(p-1))\lVert X_t\rVert_p} for all {p > 1}.
  • {{\mathbb E}[\bar X_t]\le(e/(e-1)){\mathbb E}[X_t\log X_t+1]}.

In particular, if X is a cadlag martingale, then {\lvert X\rvert} is a nonnegative cadlag submartingale, so theorem 1 applies with {\lvert X\rvert} in place of X.
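
As a quick Monte Carlo sanity check of the {L^p} inequality in theorem 1, the snippet below takes {X=\lvert W\rvert} for a simulated Brownian motion W and {p=2}; the numbers are illustrative only.

```python
import numpy as np

# Monte Carlo check of the L^p maximal inequality with X = |W| and p = 2,
# so the constant p/(p-1) equals 2. Parameters are illustrative.
rng = np.random.default_rng(2)
n_paths, n_steps, T, p = 20_000, 500, 1.0, 2.0

dW = rng.normal(0.0, np.sqrt(T / n_steps), size=(n_paths, n_steps))
X = np.abs(np.cumsum(dW, axis=1))   # a nonnegative submartingale

lhs = np.mean(np.max(X, axis=1) ** p) ** (1 / p)         # || Xbar_t ||_p
rhs = (p / (p - 1)) * np.mean(X[:, -1] ** p) ** (1 / p)  # 2 || X_t ||_p
print(lhs, "<=", rhs)
```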

We also saw the following much stronger (sub)martingale inequality in the post on the maximum maximum of martingales with known terminal distribution.

Theorem 2 Let X be a cadlag submartingale. Then, for any real K and nonnegative real t,

\displaystyle  {\mathbb P}(\bar X_t\ge K)\le\inf_{x < K}\frac{{\mathbb E}[(X_t-x)_+]}{K-x}. (1)

This is particularly sharp, in the sense that for any distribution for {X_t}, there exists a martingale with this terminal distribution for which (1) becomes an equality simultaneously for all values of K. Furthermore, all of the inequalities stated in theorem 1 follow from (1). For example, the first one is obtained by taking {x=0} in (1). The remaining two can also be proved from (1) by integrating over K.
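
As a worked example of (1), the snippet below (an illustration, not from the original post) evaluates the infimum numerically when {X_t} is standard normal, and compares it with the exact value {{\mathbb P}(\sup_{s\le t}W_s\ge K)=2(1-\Phi(K/\sqrt t))} given by the reflection principle for Brownian motion.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Evaluate the bound (1) for X_t = W_t ~ N(0, t), and compare with the
# exact value P(sup_{s<=t} W_s >= K) = 2(1 - Phi(K/sqrt(t))) from the
# reflection principle. The numbers t, K are illustrative.
t, K = 1.0, 1.5
s = np.sqrt(t)

def call(x):
    # E[(W_t - x)_+] for W_t normal with mean zero and variance t
    return s * norm.pdf(x / s) - x * (1.0 - norm.cdf(x / s))

# The lower search bound -10 is arbitrary; the infimum is attained well inside.
res = minimize_scalar(lambda x: call(x) / (K - x),
                      bounds=(-10.0, K - 1e-9), method="bounded")
exact = 2.0 * (1.0 - norm.cdf(K / s))
print(exact, "<=", res.fun)
```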

Note that all of the submartingale inequalities above are of the form

\displaystyle  {\mathbb E}[F(\bar X_t)]\le{\mathbb E}[G(X_t)] (2)

for certain choices of functions {F,G\colon{\mathbb R}\rightarrow{\mathbb R}^+}. The aim of this post is to show how they have a more general ‘pathwise’ form,

\displaystyle  F(\bar X_t)\le G(X_t) - \int_0^t\xi\,dX (3)

for some nonnegative predictable process {\xi}. It is relatively straightforward to show that (2) follows from (3) by noting that the integral is a submartingale and, hence, has nonnegative expectation. To be rigorous, there are some integrability considerations to deal with, so a proof will be included later in this post.

Inequality (3) is required to hold almost everywhere, and not just in expectation, so it is a considerably stronger statement than the standard martingale inequalities. Furthermore, X need not be a submartingale for (3) to make sense, as the stochastic integral is well-defined for any semimartingale. We can go further, and even drop the requirement that X is a semimartingale. As we will see, in the examples covered in this post, {\xi_t} will be of the form {h(\bar X_{t-})} for an increasing right-continuous function {h\colon{\mathbb R}\rightarrow{\mathbb R}}, so integration by parts can be used,

\displaystyle  \int h(\bar X_-)\,dX = h(\bar X)X-h(\bar X_0)X_0 - \int X\,dh(\bar X). (4)

The right hand side of (4) is well-defined for any cadlag real-valued process, by using the pathwise Lebesgue–Stieltjes integral with respect to the increasing process {h(\bar X)}, so can be used as the definition of {\int h(\bar X_-)dX}. In the case where X is a semimartingale, integration by parts ensures that this agrees with the stochastic integral {\int\xi\,dX}. Since we now have an interpretation of (3) in a pathwise sense for all cadlag processes X, it is no longer required to suppose that X is a submartingale, a semimartingale, or even require the existence of an underlying probability space. All that is necessary is for {t\mapsto X_t} to be a cadlag real-valued function. Hence, we reduce the martingale inequalities to straightforward results of real analysis which do not require any probability theory and, consequently, are much more general. I state the precise pathwise generalizations of Doob’s inequalities now, leaving the proofs until later in the post. As the first inequality of theorem 1 is just the special case of (1) with {x=0}, we do not need to include it explicitly here.
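
The following sketch (names and discretisation my own) implements the right hand side of (4) on a grid, as a plain Stieltjes sum against the increasing path {h(\bar X)}, and checks it against the direct left-endpoint sum for {\int h(\bar X_-)dX}; the two agree exactly by discrete summation by parts.

```python
import numpy as np

# Discretised sketch of definition (4): the pathwise integral of h(Xbar_-)
# against a path X, computed purely as a Lebesgue-Stieltjes sum against
# the increasing path h(Xbar). Function names are illustrative.
def pathwise_integral(X, h):
    hXbar = h(np.maximum.accumulate(X))
    # int X dh(Xbar) as a Stieltjes sum; evaluating X at the right endpoint
    # makes the discrete summation-by-parts identity exact.
    stieltjes = np.sum(X[1:] * np.diff(hXbar))
    return hXbar[-1] * X[-1] - hXbar[0] * X[0] - stieltjes

# Sanity check against the direct left-endpoint sum of h(Xbar_-) dX on a
# simulated random-walk path; the two agree by Abel summation.
rng = np.random.default_rng(3)
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 0.01, 10_000))])
h = lambda x: np.maximum(x, 0.0)   # increasing and right-continuous

direct = np.sum(h(np.maximum.accumulate(X))[:-1] * np.diff(X))
print(pathwise_integral(X, h), direct)
```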

Theorem 3 Let X be a cadlag process and t be a nonnegative time.

  1. For real {K > x},
    \displaystyle  1_{\{\bar X_t\ge K\}}\le\frac{(X_t-x)_+}{K-x}-\int_0^t\xi\,dX (5)

    where {\xi=(K-x)^{-1}1_{\{\bar X_-\ge K\}}}.

  2. If X is nonnegative and p,q are positive reals with {p^{-1}+q^{-1}=1} then,
    \displaystyle  \bar X_t^p\le q^p X^p_t-\int_0^t\xi\,dX (6)

    where {\xi=pq\bar X_-^{p-1}}.

  3. If X is nonnegative then,
    \displaystyle  \bar X_t\le\frac{e}{e-1}\left( X_t \log X_t +1\right)-\int_0^t\xi\,dX (7)

    where {\xi=\frac{e}{e-1}\log(\bar X_-\vee1)}.
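
As a discretised single-path check of (6) with {p=q=2}, the following snippet (illustrative choices throughout) takes X to be the absolute value of a simulated random walk approximating Brownian motion.

```python
import numpy as np

# Discretised single-path check of (6) with p = q = 2, taking X = |W| for
# a simulated approximation W of Brownian motion. Choices are illustrative.
rng = np.random.default_rng(4)
p = q = 2.0
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 0.01, 50_000))])
X = np.abs(W)

Xbar = np.maximum.accumulate(X)
xi = p * q * Xbar ** (p - 1.0)           # xi = p q Xbar^{p-1}
integral = np.sum(xi[:-1] * np.diff(X))  # left endpoints stand in for Xbar_-

lhs = Xbar[-1] ** p
rhs = q ** p * X[-1] ** p - integral
print(lhs, "<=", rhs, lhs <= rhs)
```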

Continue reading “Pathwise Martingale Inequalities”

A Process With Hidden Drift

Consider a stochastic process X of the form

\displaystyle  X_t=W_t+\int_0^t\xi_sds, (1)

for a standard Brownian motion W and predictable process {\xi}, defined with respect to a filtered probability space {(\Omega,\mathcal F,\{\mathcal F_t\}_{t\in{\mathbb R}_+},{\mathbb P})}. For this to make sense, we must assume that {\int_0^t\lvert\xi_s\rvert ds} is almost surely finite at all times, and I will suppose that {\mathcal F_\cdot} is the filtration generated by W.

The question is whether the drift {\xi} can be backed out from knowledge of the process X alone. As I will show with an example, this is not possible. In fact, in our example, X will itself be a standard Brownian motion, even though the drift {\xi} is non-trivial (that is, {\int\xi dt} is not almost surely zero). In this case X has exactly the same distribution as W, so cannot be distinguished from the driftless case with {\xi=0} by looking at the distribution of X alone.

On the face of it, this seems rather counter-intuitive. By standard semimartingale decomposition, it is known that we can always decompose

\displaystyle  X=M+A (2)

for a unique continuous local martingale M starting from zero, and unique continuous FV process A. By uniqueness, {M=W} and {A=\int\xi dt}. This allows us to back out the drift {\xi} and, in particular, if the drift is non-trivial then X cannot be a martingale. However, in the semimartingale decomposition, it is required that M is a martingale with respect to the original filtration {\mathcal F_\cdot}. If we do not know the filtration {\mathcal F_\cdot}, then it might not be possible to construct decomposition (2) from knowledge of X alone. As mentioned above, we will give an example where X is a standard Brownian motion which, in particular, means that it is a martingale under its natural filtration. By the semimartingale decomposition result, it is not possible for X to be an {\mathcal F_\cdot}-martingale. A consequence of this is that the natural filtration of X must be strictly smaller than the natural filtration of W.

The inspiration for this post was a comment by Gabe posing the following question: If we take {\mathbb F} to be the filtration generated by a standard Brownian motion W in {(\Omega,\mathcal F,{\mathbb P})}, and we define {\tilde W_t=W_t+\int_0^t\Theta_udu}, can we find an {\mathbb F}-adapted {\Theta} such that the filtration generated by {\tilde W} is smaller than {\mathbb F}? Our example gives an affirmative answer. Continue reading “A Process With Hidden Drift”

Proof of Measurable Section

I will give a proof of the measurable section theorem, also known as measurable selection. Given a complete probability space {(\Omega,\mathcal F,{\mathbb P})}, we denote the projection from {\Omega\times{\mathbb R}} by

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\pi_\Omega\colon \Omega\times{\mathbb R}\rightarrow\Omega,\smallskip\\ &\displaystyle\pi_\Omega(\omega,t)=\omega. \end{array}

By definition, if {S\subseteq\Omega\times{\mathbb R}} then, for every {\omega\in\pi_\Omega(S)}, there exists a {t\in{\mathbb R}} such that {(\omega,t)\in S}. The measurable section theorem says that this choice can be made in a measurable way. That is, using {\mathcal B({\mathbb R})} to denote the Borel sigma-algebra, if S is in the product sigma-algebra {\mathcal F\otimes\mathcal B({\mathbb R})} then {\pi_\Omega(S)\in\mathcal F} and there is a measurable map

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\tau\colon\pi_\Omega(S)\rightarrow{\mathbb R},\smallskip\\ &\displaystyle(\omega,\tau(\omega))\in S. \end{array}

It is convenient to extend {\tau} to the whole of {\Omega} by setting {\tau=\infty} outside of {\pi_\Omega(S)}.

Figure 1: A section of a measurable set

We consider measurable functions {\tau\colon\Omega\rightarrow{\mathbb R}\cup\{\infty\}}. The graph of {\tau} is

\displaystyle  [\tau]=\left\{(\omega,\tau(\omega))\colon\tau(\omega)\in{\mathbb R}\right\}\subseteq\Omega\times{\mathbb R}.

The condition that {(\omega,\tau(\omega))\in S} whenever {\tau < \infty} can then be expressed by stating that {[\tau]\subseteq S}. This also ensures that {\{\tau < \infty\}} is a subset of {\pi_\Omega(S)}, and {\tau} is a section of S on the whole of {\pi_\Omega(S)} if and only if {\{\tau < \infty\}=\pi_\Omega(S)}.

The proof of the measurable section theorem will make use of the properties of analytic sets and of the Choquet capacitability theorem, as described in the previous two posts. [Note: I have since posted a more direct proof which does not involve such prerequisites.] Recall that a paving {\mathcal E} on a set X denotes, simply, a collection of subsets of X. The pair {(X,\mathcal E)} is then referred to as a paved space. Given a pair of paved spaces {(X,\mathcal E)} and {(Y,\mathcal F)}, the product paving {\mathcal E\times\mathcal F} denotes the collection of cartesian products {A\times B} for {A\in\mathcal E} and {B\in\mathcal F}, which is a paving on {X\times Y}. The notation {\mathcal E_\delta} is used for the collection of countable intersections of a paving {\mathcal E}.

We start by showing that measurable section holds in a very simple case, in which the debut of the set S can be used as its section. The debut is the map

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle D(S)\colon\Omega\rightarrow{\mathbb R}\cup\{\pm\infty\},\smallskip\\ &\displaystyle \omega\mapsto\inf\left\{t\in{\mathbb R}\colon (\omega,t)\in S\right\}. \end{array}

We use the convention that the infimum of the empty set is {\infty}. It is not clear that {D(S)} is measurable, and we do not rely on this, although measurable projection can be used to show that it is measurable whenever S is in {\mathcal F\otimes\mathcal B({\mathbb R})}.
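
As a toy illustration of the debut map (certainly not a substitute for the measure-theoretic argument), the following snippet computes {D(S)} over a finite grid of outcomes and times, with the convention that the infimum of the empty set is {\infty}.

```python
import numpy as np

# Toy debut computation on a finite grid: Omega has four outcomes and S is
# a boolean array indexed by (omega, time). Entirely an illustrative,
# discretised stand-in for the measure-theoretic setting.
rng = np.random.default_rng(5)
times = np.linspace(0.0, 1.0, 101)
S = rng.random((4, times.size)) < 0.02  # S[i, j]: is (omega_i, t_j) in S?

def debut(row):
    hits = np.flatnonzero(row)
    return times[hits[0]] if hits.size else np.inf  # inf of empty set is +inf

for i, row in enumerate(S):
    print(f"omega_{i}: D(S) = {debut(row)}")
```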

Lemma 1 Let {(\Omega,\mathcal F)} be a measurable space, {\mathcal K} be the collection of compact intervals in {{\mathbb R}}, and {\mathcal E} be the closure of the paving {\mathcal{F\times K}} under finite unions. Then, the debut {D(S)} of any {S\in\mathcal E_\delta} is measurable and its graph {[D(S)]} is contained in S.

Continue reading “Proof of Measurable Section”

Choquet’s Capacitability Theorem and Measurable Projection

In this post I will give a proof of the measurable projection theorem. Recall that this states that for a complete probability space {(\Omega,\mathcal F,{\mathbb P})} and a set S in the product sigma-algebra {\mathcal F\otimes\mathcal B({\mathbb R})}, the projection, {\pi_\Omega(S)}, of S onto {\Omega}, is in {\mathcal F}. The previous post on analytic sets made some progress towards this result. Indeed, using the definitions and results given there, it follows quickly that {\pi_\Omega(S)} is {\mathcal F}-analytic. To complete the proof of measurable projection, it is necessary to show that analytic sets are measurable. This is a consequence of Choquet’s capacitability theorem, which I will prove in this post. Measurable projection follows as a simple consequence.

The condition that the underlying probability space is complete is necessary and, if it were dropped, then the result would no longer hold. Recall that, if {(\Omega,\mathcal F,{\mathbb P})} is a probability space, then the completion, {\mathcal F_{\mathbb P}}, of {\mathcal F} with respect to {{\mathbb P}} consists of the sets {A\subseteq\Omega} such that there exist {B,C\in\mathcal F} with {B\subseteq A\subseteq C} and {{\mathbb P}(B)={\mathbb P}(C)}. The probability space is complete if {\mathcal F_{\mathbb P}=\mathcal F}. More generally, {{\mathbb P}} can be uniquely extended to a measure {\bar{\mathbb P}} on the sigma-algebra {\mathcal F_{\mathbb P}} by setting {\bar{\mathbb P}(A)={\mathbb P}(B)={\mathbb P}(C)}, where B and C are as above. Then {(\Omega,\mathcal F_{\mathbb P},\bar{\mathbb P})} is the completion of {(\Omega,\mathcal F,{\mathbb P})}.

In measurable projection, then, it needs to be shown that if {A\subseteq\Omega} is the projection of a set in {\mathcal F\otimes\mathcal B({\mathbb R})}, then A is in the completion of {\mathcal F}. That is, we need to find sets {B,C\in\mathcal F} with {B\subseteq A\subseteq C} with {{\mathbb P}(B)={\mathbb P}(C)}. In fact, it is always possible to find a {C\supseteq A} in {\mathcal F} which minimises {{\mathbb P}(C)}, and its measure is referred to as the outer measure of A. For any probability measure {{\mathbb P}}, we can define an outer measure on the subsets of {\Omega}, {{\mathbb P}^*\colon\mathcal P(\Omega)\rightarrow{\mathbb R}^+} by approximating {A\subseteq\Omega} from above,

\displaystyle  {\mathbb P}^*(A)\equiv\inf\left\{{\mathbb P}(B)\colon B\in\mathcal F, A\subseteq B\right\}. (1)

Similarly, we can define an inner measure by approximating A from below,

\displaystyle  {\mathbb P}_*(A)\equiv\sup\left\{{\mathbb P}(B)\colon B\in\mathcal F, B\subseteq A\right\}.

It can be shown that A is {\mathcal F}-measurable if and only if {{\mathbb P}_*(A)={\mathbb P}^*(A)}. We will be concerned primarily with the outer measure {{\mathbb P}^*}, and will show that if A is the projection of some {S\in\mathcal F\otimes\mathcal B({\mathbb R})}, then A can be approximated from below in the following sense: there exists {B\subseteq A} in {\mathcal F} for which {{\mathbb P}^*(B)={\mathbb P}^*(A)}. From this, it will follow that A is in the completion of {\mathcal F}.
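
To make the definitions concrete, here is a toy computation of the outer and inner measures on a four-point space whose sigma-algebra is generated by a two-set partition; the set {A=\{0\}} splits an atom, so {{\mathbb P}_*(A) < {\mathbb P}^*(A)} and A is not measurable. The example is my own.

```python
# Outer and inner measures (1) on a four-point space, with the sigma-algebra
# generated by the partition {{0,1},{2,3}} and the uniform measure. The
# example is entirely illustrative.
F = [set(), {0, 1}, {2, 3}, {0, 1, 2, 3}]
P = {frozenset(B): len(B) / 4 for B in F}

def outer(A):
    return min(P[frozenset(B)] for B in F if A <= B)  # approximate from above

def inner(A):
    return max(P[frozenset(B)] for B in F if B <= A)  # approximate from below

A = {0}  # splits the atom {0, 1}, so it is not F-measurable
print(inner(A), "<", outer(A))  # 0.0 < 0.5
```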

It is convenient to prove the capacitability theorem in slightly greater generality than just for the outer measure {{\mathbb P}^*}. The only property of {{\mathbb P}^*} that is required is that it is a capacity, which we now define. Recall that a paving {\mathcal E} on a set X is simply any collection of subsets of X, and we refer to the pair {(X,\mathcal E)} as a paved space.

Definition 1 Let {(X,\mathcal E)} be a paved space. Then, an {\mathcal E}-capacity is a map {I\colon\mathcal P(X)\rightarrow{\mathbb R}} which is increasing, continuous along increasing sequences, and continuous along decreasing sequences in {\mathcal E}. That is,

  • if {A\subseteq B} then {I(A)\le I(B)}.
  • if {A_n\subseteq X} is increasing in n then {I(A_n)\rightarrow I(\bigcup_nA_n)} as {n\rightarrow\infty}.
  • if {A_n\in\mathcal E} is decreasing in n then {I(A_n)\rightarrow I(\bigcap_nA_n)} as {n\rightarrow\infty}.

As was claimed above, the outer measure {{\mathbb P}^*} defined by (1) is indeed a capacity.

Lemma 2 Let {(\Omega,\mathcal F,{\mathbb P})} be a probability space. Then,

  • {{\mathbb P}^*(A)={\mathbb P}(A)} for all {A\in\mathcal F}.
  • For all {A\subseteq\Omega}, there exists a {B\in\mathcal F} with {A\subseteq B} and {{\mathbb P}^*(A)={\mathbb P}(B)}.
  • {{\mathbb P}^*} is an {\mathcal F}-capacity.

Continue reading “Choquet’s Capacitability Theorem and Measurable Projection”

Analytic Sets

We will shortly give a proof of measurable projection and, also, of the section theorems. Starting with the projection theorem, recall that this states that if {(\Omega,\mathcal F,{\mathbb P})} is a complete probability space, then the projection of any measurable subset of {\Omega\times{\mathbb R}} onto {\Omega} is measurable. To be precise, the condition is that S is in the product sigma-algebra {\mathcal{F}\otimes\mathcal B({\mathbb R})}, where {\mathcal B({\mathbb R})} denotes the Borel sets in {{\mathbb R}}, and {\pi\colon\Omega\times{\mathbb R}\rightarrow\Omega} is the projection {\pi(\omega,t)=\omega}. Then, {\pi(S)\in\mathcal{F}}. Although it looks like a very basic property of measurable sets, maybe even obvious, measurable projection is a surprisingly difficult result to prove. In fact, the requirement that the probability space is complete is necessary and, if it is dropped, then {\pi(S)} need not be measurable. Counterexamples exist for commonly used measurable spaces such as {\Omega= {\mathbb R}} and {\mathcal F=\mathcal B({\mathbb R})}. This suggests that there is something deeper going on here than basic manipulations of measurable sets.

The techniques which will be used to prove the projection theorem involve analytic sets, which will be introduced in this post, with the proof of measurable projection to follow in the next post. [Note: I have since posted a more direct proof of measurable projection and section, which does not make use of analytic sets.] These results can also be used to prove the optional and predictable section theorems which, at first appearances, seem to be quite basic statements. The section theorems are fundamental to the powerful and interesting theory of optional and predictable projection which is, consequently, generally considered to be a hard part of stochastic calculus. In fact, the projection and section theorems are really not that hard to prove, although the method given here does require stepping outside of the usual setup used in probability and involves something more like descriptive set theory. Continue reading “Analytic Sets”

Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample

Figure 1: The function f, convex in x and decreasing in t

Here, I attempt to construct a counterexample to the hypotheses of the earlier post, Do convex and decreasing functions preserve the semimartingale property? There, it was asked, for any semimartingale X and function {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} such that {f(t,x)} is convex in x and right-continuous and decreasing in t, is {f(t,X_t)} necessarily a semimartingale? It was explained how this is equivalent to the hypothesis: for any function {f\colon[0,1]^2\rightarrow{\mathbb R}} such that {f(t,x)} is convex and Lipschitz continuous in x and decreasing in t, does it decompose as {f=g-h} where {g(t,x)} and {h(t,x)} are convex in x and increasing in t. This is the form of the hypothesis which this post will be concerned with, so the example will only involve simple real analysis and no stochastic calculus. I will give some numerical calculations suggesting that the construction below is a counterexample, but do not have any proof of this. So, the hypothesis is still open.

Although the construction given here will be self-contained, it is worth noting that it is connected to the example of a martingale which moves along a deterministic path. If {\{M_t\}_{t\in[0,1]}} is the martingale constructed there, then

\displaystyle  C(t,x)={\mathbb E}[(M_t-x)_+]

defines a function from {[0,1]\times[-1,1]} to {{\mathbb R}} which is convex in x and increasing in t. The question is then whether C can be expressed as the difference of functions which are convex in x and decreasing in t. The example constructed in this post will be the same as C with the time direction reversed, and with a linear function of x added so that it is zero at {x=\pm1}. Continue reading “Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample”
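
For illustration (with M replaced by a Brownian motion, not the special martingale of that post), {C(t,x)={\mathbb E}[(W_t-x)_+]} has a closed form, and the snippet below checks convexity in x and monotonicity in t on a grid via finite differences.

```python
import numpy as np
from scipy.stats import norm

# With M replaced by a Brownian motion W (purely for illustration),
# C(t, x) = E[(W_t - x)_+] has a closed form; check convexity in x and
# monotonicity in t on a grid via finite differences.
def C(t, x):
    s = np.sqrt(t)
    return s * norm.pdf(x / s) - x * (1.0 - norm.cdf(x / s))

ts = np.linspace(0.01, 1.0, 50)[:, None]
xs = np.linspace(-1.0, 1.0, 101)[None, :]
vals = C(ts, xs)

print("increasing in t:", bool(np.all(np.diff(vals, axis=0) >= 0)))
print("convex in x:", bool(np.all(np.diff(vals, n=2, axis=1) >= -1e-12)))
```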

A Martingale Which Moves Along a Deterministic Path

Figure 1: Sample paths

In this post I will construct a continuous and non-constant martingale M which only varies on the path of a deterministic function {f\colon{\mathbb R}_+\rightarrow{\mathbb R}}. That is, {M_t=f(t)} at all times outside of the set of nontrivial intervals on which M is constant. Expressed in terms of the stochastic integral, {dM_t=0} on the set {\{t\colon M_t\not=f(t)\}} and,

\displaystyle  M_t = \int_0^t 1_{\{M_s=f(s)\}}\,dM_s. (1)

In the example given here, f will be right-continuous. Examples with continuous f do exist, although the constructions I know of are considerably more complicated. At first sight, these properties appear to contradict what we know about continuous martingales. They vary unpredictably, behaving completely unlike any deterministic function. It is certainly the case that we cannot have {M_t=f(t)} across any interval on which M is not constant.

By a stochastic time-change, any Brownian motion B can be transformed to have the same distribution as M. This means that there exists an increasing and right-continuous process A adapted to the same filtration as B and such that {B_t=M_{A_t}} where M is a martingale as above. From this, we can infer that

\displaystyle  B_t=f(A_t),

expressing Brownian motion as a function of an increasing process. Continue reading “A Martingale Which Moves Along a Deterministic Path”

Do Convex and Decreasing Functions Preserve the Semimartingale Property?

Some years ago, I spent considerable effort trying to prove the hypothesis below. After failing at this, I spent time trying to find a counterexample, but also with no success. I did post this as a question on mathoverflow, but it has so far received no conclusive answers. So, as far as I am aware, the following statement remains unproven either way.

Hypothesis H1 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and right-continuous and decreasing in t. Then, for any semimartingale X, {f(t,X_t)} is a semimartingale.

It is well known that convex functions of semimartingales are themselves semimartingales. See, for example, the Ito-Tanaka formula. More generally, if {f(t,x)} were increasing in t rather than decreasing, then it can be shown without much difficulty that {f(t,X_t)} is a semimartingale. Consider decomposing {f(t,X_t)} as

\displaystyle  f(t,X_t)=\int_0^tf_x(s,X_{s-})\,dX_s+V_t, (1)

for some process V. By convexity, the right hand derivative of {f(t,x)} with respect to x always exists, and I am denoting this by {f_x}. In the case where f is twice continuously differentiable, the process V is given by Ito’s formula which, in particular, shows that it is a finite variation process. If {f(t,x)} is convex in x and increasing in t, then the terms in Ito’s formula for V are all increasing and, so, it is an increasing process. By taking limits of smooth functions, it follows that V is increasing even when the differentiability constraints are dropped, so {f(t,X_t)} is a semimartingale. Now, returning to the case where {f(t,x)} is decreasing in t, Ito’s formula only shows that V is of finite variation, and it is generally not monotonic. As limits of finite variation processes need not be of finite variation themselves, this says nothing about the case where f is not assumed to be differentiable, and does not help us to determine whether or not {f(t,X_t)} is a semimartingale.
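
To illustrate decomposition (1) in the smooth increasing case, the snippet below (an illustrative example, not from the original post) takes {f(t,x)=x^2+t}, convex in x and increasing in t, along a simulated Brownian path; each discrete increment of V works out to {(\Delta X)^2+\Delta t > 0}, so V is increasing as claimed.

```python
import numpy as np

# Illustration of decomposition (1) for the smooth case f(t, x) = x^2 + t,
# convex in x and increasing in t, along a simulated Brownian path. Each
# discrete increment of V equals (dX)^2 + dt > 0, so V is increasing.
rng = np.random.default_rng(6)
n, T = 50_000, 1.0
t = np.linspace(0.0, T, n + 1)
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))])

f = lambda t, x: x ** 2 + t
f_x = lambda t, x: 2.0 * x  # right derivative in x

integral = np.concatenate([[0.0], np.cumsum(f_x(t[:-1], X[:-1]) * np.diff(X))])
V = f(t, X) - integral
print("V increasing:", bool(np.all(np.diff(V) > 0)))
```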

Hypothesis H1 can be weakened by restricting to continuous functions of continuous martingales.

Hypothesis H2 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and continuous and decreasing in t. Then, for any continuous martingale X, {f(t,X_t)} is a semimartingale.

As continuous martingales are special cases of semimartingales, hypothesis H1 implies H2. In fact, the reverse implication also holds so that hypotheses H1 and H2 are equivalent.

Hypotheses H1 and H2 can also be recast as a simple real analysis statement which makes no reference to stochastic processes.

Hypothesis H3 Let {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} be such that {f(t,x)} is convex in x and decreasing in t. Then, {f=g-h} where {g(t,x)} and {h(t,x)} are convex in x and increasing in t.

Continue reading “Do Convex and Decreasing Functions Preserve the Semimartingale Property?”

Failure of the Martingale Property For Stochastic Integration

If X is a cadlag martingale and {\xi} is a uniformly bounded predictable process, then is the integral

\displaystyle  Y=\int\xi\,dX (1)

a martingale? If {\xi} is elementary, this is one of the most basic properties of martingales. If X is a square integrable martingale, then so is Y. More generally, if X is an {L^p}-integrable martingale, for any {p > 1}, then so is Y. Furthermore, integrability of the maximum {\sup_{s\le t}\lvert X_s\rvert} is enough to guarantee that Y is a martingale. Also, it is a fundamental result of stochastic integration that Y is at least a local martingale and, for this to be true, it is only necessary for X to be a local martingale and {\xi} to be locally bounded. In the general situation for cadlag martingales X and bounded predictable {\xi}, it need not be the case that Y is a martingale. In this post I will construct an example showing that Y can fail to be a martingale. Continue reading “Failure of the Martingale Property For Stochastic Integration”
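
A discrete-time analogue of the elementary case may help fix ideas: for a martingale X and bounded predictable {\xi}, the martingale transform {Y_n=\sum_{k<n}\xi_k(X_{k+1}-X_k)} is again a martingale. The snippet below (my own notation) checks the necessary condition {{\mathbb E}[Y_n]=0} by Monte Carlo.

```python
import numpy as np

# Discrete-time analogue of the elementary case: for a martingale X and a
# bounded predictable xi, Y_n = sum_{k<n} xi_k (X_{k+1} - X_k) is again a
# martingale (a martingale transform). This only checks E[Y_n] = 0, a
# necessary condition, up to Monte Carlo error.
rng = np.random.default_rng(7)
n_paths, n_steps = 100_000, 20

steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))  # X: simple random walk
X = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
xi = np.sign(X[:, :-1])   # predictable: depends only on the past, |xi| <= 1

Y = np.cumsum(xi * np.diff(X, axis=1), axis=1)
print(np.abs(Y.mean(axis=0)).max())  # E[Y_n] = 0 for every n, up to MC error
```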