# Pathwise Burkholder-Davis-Gundy Inequalities

As covered earlier in my notes, the Burkholder-Davis-Gundy inequality relates the moments of the maximum of a local martingale M with its quadratic variation,

 $\displaystyle c_p^{-1}{\mathbb E}[[M]^{p/2}_\tau]\le{\mathbb E}[\bar M_\tau^p]\le C_p{\mathbb E}[[M]^{p/2}_\tau].$ (1)

Here, ${\bar M_t\equiv\sup_{s\le t}\lvert M_s\rvert}$ is the running maximum, ${[M]}$ is the quadratic variation, ${\tau}$ is a stopping time, and the exponent ${p}$ is a real number greater than or equal to 1. The constants ${c_p}$ and ${C_p}$ are positive and depend on p, but are independent of the choice of local martingale and stopping time. Furthermore, for continuous local martingales, which are the focus of this post, the inequality holds for all ${p > 0}$.
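Although no substitute for a proof, the two sides of (1) can be compared numerically. The following sketch (the parameters and path counts are arbitrary choices of mine) simulates Brownian motion, for which ${[M]_t=t}$, and estimates the ratio of the two expectations for ${p=2}$, where the inequality is known to hold with ${c_2=1}$ and ${C_2=4}$.

```python
import numpy as np

# Monte Carlo illustration of the BDG inequality (1) for Brownian motion,
# where [M]_t = t, taking p = 2 (known constants c_2 = 1, C_2 = 4).
rng = np.random.default_rng(seed=1)
n_paths, n_steps, T = 20000, 400, 1.0
dt = T / n_steps

increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)          # B_t sampled on a grid
bar_M = np.abs(paths).max(axis=1)              # running maximum at time T

lhs = T                                        # E[[M]_T^{p/2}] = T for p = 2
rhs = (bar_M ** 2).mean()                      # estimate of E[bar M_T^2]
ratio = rhs / lhs
print(f"E[bar M^2] / E[[M]_T] = {ratio:.3f}")  # should lie between 1 and 4
```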

Since the quadratic variation used in my notes, by definition, starts at zero, the BDG inequality also required the local martingale to start at zero. This is not an important restriction, but it can be removed by requiring the quadratic variation to start at ${[M]_0=M_0^2}$. Henceforth, I will assume that this is the case, which means that if we are working with the definition in my notes then we should add ${M_0^2}$ everywhere to the quadratic variation ${[M]}$.

In keeping with the theme of the previous post on Doob’s inequalities, such martingale inequalities should have pathwise versions of the form

 $\displaystyle c_p^{-1}[M]^{p/2}+\int\alpha dM\le\bar M^p\le C_p[M]^{p/2}+\int\beta dM$ (2)

for predictable processes ${\alpha,\beta}$. Inequalities in this form are considerably stronger than (1), since they apply on all sample paths, not just on average. Also, we do not require M to be a local martingale — it is sufficient for M to be a (continuous) semimartingale. However, in the case where M is a local martingale, the pathwise version (2) does imply the BDG inequality (1), using the fact that stochastic integration preserves the local martingale property.

Lemma 1 Let X and Y be nonnegative increasing measurable processes satisfying ${X\le Y-N}$ for a local (sub)martingale N starting from zero. Then, ${{\mathbb E}[X_\tau]\le{\mathbb E}[Y_\tau]}$ for all stopping times ${\tau}$.

Proof: Let ${\tau_n}$ be a sequence of bounded stopping times increasing to infinity such that the stopped processes ${N^{\tau_n}}$ are submartingales. By optional sampling, ${{\mathbb E}[N_{\tau_n\wedge\tau}]\ge{\mathbb E}[N_0]=0}$. Then,

$\displaystyle {\mathbb E}[1_{\{\tau_n\ge\tau\}}X_\tau]\le{\mathbb E}[X_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_{\tau_n\wedge\tau}]-{\mathbb E}[N_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_{\tau_n\wedge\tau}]\le{\mathbb E}[Y_\tau].$

Letting n increase to infinity and using monotone convergence on the left hand side gives the result. ⬜

Moving on to the main statements of this post, I will mention that there are actually many different pathwise versions of the BDG inequalities. I opt for the especially simple statements given in Theorem 2 below. See the papers Pathwise Versions of the Burkholder-Davis-Gundy Inequality by Beiglböck and Siorpaes, and Applications of Pathwise Burkholder-Davis-Gundy inequalities by Siorpaes, for slightly different approaches, although these papers do also effectively contain proofs of (3,4) for the special case of ${r=1/2}$. As usual, I am using ${x\vee y}$ to represent the maximum of two numbers.

Theorem 2 Let X and Y be nonnegative continuous processes with ${X_0=Y_0}$. For any ${0 < r\le1}$ we have,

 $\displaystyle (1-r)\bar X^r\le (3-2r)\bar Y^r+r\int(\bar X\vee\bar Y)^{r-1}d(X-Y)$ (3)

and, if X is increasing, this can be improved to,

 $\displaystyle \bar X^r\le (2-r)\bar Y^r+r\int(\bar X\vee\bar Y)^{r-1}d(X-Y).$ (4)

If ${r\ge1}$ and X is increasing then,

 $\displaystyle \bar X^r\le r^{r\vee 2}\,\bar Y^r+r^2\int(\bar X\vee\bar Y)^{r-1}d(X-Y).$ (5)
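Inequality (4) is a purely pathwise statement, so it can be tested directly on simulated paths. The sketch below (the particular paths, the exponent r and the grid size are arbitrary choices of mine) approximates the Stieltjes integral by left-endpoint sums on a fine grid, so the inequality should hold up to a small discretization error.

```python
import numpy as np

# Numerical check of the pathwise inequality (4) for increasing X,
# using left-endpoint Riemann-Stieltjes sums on a fine grid.
rng = np.random.default_rng(seed=2)
n = 100_000
r = 0.5

X = 1.0 + np.linspace(0.0, 2.0, n)                 # increasing, X_0 = 1
W = np.concatenate([[0.0], np.cumsum(rng.normal(0, n ** -0.5, n - 1))])
Y = np.abs(1.0 + W)                                # nonnegative, Y_0 = 1 = X_0

bar_X = np.maximum.accumulate(X)
bar_Y = np.maximum.accumulate(Y)
integrand = np.maximum(bar_X, bar_Y)[:-1] ** (r - 1.0)   # left endpoint
integral = np.sum(integrand * np.diff(X - Y))

lhs = bar_X[-1] ** r
rhs = (2.0 - r) * bar_Y[-1] ** r + r * integral
print(lhs, rhs)                                    # expect lhs <= rhs
```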

# Pathwise Martingale Inequalities

Recall Doob’s inequalities, covered earlier in these notes, which bound expectations of functions of the maximum of a martingale in terms of its terminal distribution. Although these are often applied to martingales, they hold true more generally for cadlag submartingales. Here, I use ${\bar X_t\equiv\sup_{s\le t}X_s}$ to denote the running maximum of a process.

Theorem 1 Let X be a nonnegative cadlag submartingale. Then,

• ${{\mathbb P}\left(\bar X_t \ge K\right)\le K^{-1}{\mathbb E}[X_t]}$ for all ${K > 0}$.
• ${\lVert\bar X_t\rVert_p\le (p/(p-1))\lVert X_t\rVert_p}$ for all ${p > 1}$.
• ${{\mathbb E}[\bar X_t]\le(e/(e-1)){\mathbb E}[X_t\log X_t+1]}$.

In particular, if X is a cadlag martingale then ${\lvert X\rvert}$ is a submartingale, so theorem 1 applies with ${\lvert X\rvert}$ in place of X.

We also saw the following much stronger (sub)martingale inequality in the post on the maximum maximum of martingales with known terminal distribution.

Theorem 2 Let X be a cadlag submartingale. Then, for any real K and nonnegative real t,

 $\displaystyle {\mathbb P}(\bar X_t\ge K)\le\inf_{x < K}\frac{{\mathbb E}[(X_t-x)_+]}{K-x}.$ (1)

This is particularly sharp, in the sense that for any distribution for ${X_t}$, there exists a martingale with this terminal distribution for which (1) becomes an equality simultaneously for all values of K. Furthermore, all of the inequalities stated in theorem 1 follow from (1). For example, the first one is obtained by taking ${x=0}$ in (1). The remaining two can also be proved from (1) by integrating over K.
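To sketch how the second inequality of theorem 1 follows from (1) for nonnegative X (this is the standard argument; the choice ${x=(1-1/p)K}$, which turns out to be optimal, is made for convenience): write the moment as an integral over the tail probabilities, apply (1), and use Fubini's theorem.

```latex
% Doob's L^p inequality from (1), choosing x = (1-1/p)K so that K - x = K/p.
\begin{aligned}
\mathbb{E}[\bar X_t^p]
  &= \int_0^\infty pK^{p-1}\,\mathbb{P}(\bar X_t\ge K)\,dK
   \le \int_0^\infty p^2K^{p-2}\,\mathbb{E}\bigl[(X_t-(1-1/p)K)_+\bigr]\,dK \\
  &= p^2\,\mathbb{E}\left[\int_0^{\frac{p}{p-1}X_t} K^{p-2}\bigl(X_t-(1-1/p)K\bigr)\,dK\right]
   = \Bigl(\frac{p}{p-1}\Bigr)^p\,\mathbb{E}[X_t^p].
\end{aligned}
```

Taking p-th roots gives the ${L^p}$ inequality; a truncation argument handles the case where ${\mathbb E}[\bar X_t^p]$ is not known in advance to be finite.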

Note that all of the submartingale inequalities above are of the form

 $\displaystyle {\mathbb E}[F(\bar X_t)]\le{\mathbb E}[G(X_t)]$ (2)

for certain choices of functions ${F,G\colon{\mathbb R}\rightarrow{\mathbb R}^+}$. The aim of this post is to show how they have a more general ‘pathwise’ form,

 $\displaystyle F(\bar X_t)\le G(X_t) - \int_0^t\xi\,dX$ (3)

for some nonnegative predictable process ${\xi}$. It is relatively straightforward to show that (2) follows from (3) by noting that the integral is a submartingale and, hence, has nonnegative expectation. To be rigorous, there are some integrability considerations to deal with, so a proof will be included later in this post.

Inequality (3) is required to hold almost everywhere, and not just in expectation, so is a considerably stronger statement than the standard martingale inequalities. Furthermore, it is not necessary for X to be a submartingale for (3) to make sense, as it holds for all semimartingales. We can go further, and even drop the requirement that X is a semimartingale. As we will see, in the examples covered in this post, ${\xi_t}$ will be of the form ${h(\bar X_{t-})}$ for an increasing right-continuous function ${h\colon{\mathbb R}\rightarrow{\mathbb R}}$, so integration by parts can be used,

 $\displaystyle \int h(\bar X_-)\,dX = h(\bar X)X-h(\bar X_0)X_0 - \int X\,dh(\bar X).$ (4)

The right hand side of (4) is well-defined for any cadlag real-valued process, by using the pathwise Lebesgue–Stieltjes integral with respect to the increasing process ${h(\bar X)}$, so can be used as the definition of ${\int h(\bar X_-)dX}$. In the case where X is a semimartingale, integration by parts ensures that this agrees with the stochastic integral ${\int\xi\,dX}$. Since we now have an interpretation of (3) in a pathwise sense for all cadlag processes X, it is no longer required to suppose that X is a submartingale, a semimartingale, or even to require the existence of an underlying probability space. All that is necessary is for ${t\mapsto X_t}$ to be a cadlag real-valued function. Hence, the martingale inequalities reduce to straightforward results of real analysis, not requiring any probability theory, and are consequently much more general. I state the precise pathwise generalizations of Doob’s inequalities now, leaving the proofs until later in the post. As the first inequality of theorem 1 is just the special case of (1) with ${x=0}$, we do not need to explicitly include it here.
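In discrete time, (4) is just the Abel summation-by-parts identity, which holds exactly. The following sketch verifies this for the running maximum of a random path, with h an arbitrary increasing continuous function of my choosing.

```python
import numpy as np

# Discrete analogue of the integration-by-parts formula (4):
#   sum h(M_{i-1}) dX_i = h(M_n)X_n - h(M_0)X_0 - sum X_i dh(M_i),
# where M is the running maximum of X and h is increasing.
rng = np.random.default_rng(seed=3)
X = np.abs(1.0 + np.cumsum(rng.normal(0, 0.1, 1000)))
M = np.maximum.accumulate(X)
h = lambda x: np.log(np.maximum(x, 1.0))     # increasing, h = log^+

lhs = np.sum(h(M[:-1]) * np.diff(X))         # 'predictable' left evaluation
rhs = h(M[-1]) * X[-1] - h(M[0]) * X[0] - np.sum(X[1:] * np.diff(h(M)))
print(lhs, rhs)                              # agree up to rounding error
```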

Theorem 3 Let X be a cadlag process and t be a nonnegative time.

1. For real ${K > x}$,
 $\displaystyle 1_{\{\bar X_t\ge K\}}\le\frac{(X_t-x)_+}{K-x}-\int_0^t\xi\,dX$ (5)

where ${\xi=(K-x)^{-1}1_{\{\bar X_-\ge K\}}}$.

2. If X is nonnegative and p,q are positive reals with ${p^{-1}+q^{-1}=1}$ then,
 $\displaystyle \bar X_t^p\le q^p X^p_t-\int_0^t\xi\,dX$ (6)

where ${\xi=pq\bar X_-^{p-1}}$.

3. If X is nonnegative then,
 $\displaystyle \bar X_t\le\frac{e}{e-1}\left( X_t \log X_t +1\right)-\int_0^t\xi\,dX$ (7)

where ${\xi=\frac{e}{e-1}\log(\bar X_-\vee1)}$.
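As theorem 3 applies to every cadlag path, inequality (6) can be checked exactly on a piecewise-constant random path, where the integral reduces to a finite sum with ${\bar X_{-}}$ evaluated at the previous step. A sketch (the path construction and exponent are arbitrary choices of mine):

```python
import numpy as np

# Pathwise Doob L^p inequality (6) on a piecewise-constant path:
#   bar X^p <= q^p X^p - sum xi_i dX_i,  with xi_i = p*q*bar X_{i-1}^{p-1}.
rng = np.random.default_rng(seed=4)
p = q = 2.0                                      # 1/p + 1/q = 1

X = np.abs(np.cumsum(rng.normal(0, 1.0, 500)))   # nonnegative cadlag path
bar_X = np.maximum.accumulate(X)
xi = p * q * bar_X[:-1] ** (p - 1.0)             # left limit of the maximum
integral = np.sum(xi * np.diff(X))

lhs = bar_X[-1] ** p
rhs = q ** p * X[-1] ** p - integral
print(lhs, rhs)                                  # expect lhs <= rhs
```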

# Martingales with Non-Integrable Maximum

It is a consequence of Doob’s maximal inequality that any ${L^p}$-integrable martingale has a maximum, up to a finite time, which is also ${L^p}$-integrable for any ${p > 1}$. Using ${X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert}$ to denote the running absolute maximum of a cadlag martingale X, then ${X^*}$ is ${L^p}$-integrable whenever ${X}$ is. It is natural to ask whether this also holds for ${p=1}$. As martingales are integrable by definition, this is just asking whether cadlag martingales necessarily have an integrable maximum. Integrability of the maximum process does have some important consequences in the theory of martingales. By the Burkholder-Davis-Gundy inequality, it is equivalent to the square-root of the quadratic variation, ${[X]^{1/2}}$, being integrable. Stochastic integration over bounded integrands preserves the martingale property, so long as the martingale has integrable maximal process. The continuous and purely discontinuous parts of a martingale X are themselves local martingales, but are not guaranteed to be proper martingales unless X has integrable maximum process.

The aim of this post is to show, by means of some examples, that a cadlag martingale need not have an integrable maximum.

# The Optimality of Doob’s Maximal Inequality

One of the most fundamental and useful results in the theory of martingales is Doob’s maximal inequality. Use ${X^*_t\equiv\sup_{s\le t}\lvert X_s\rvert}$ to denote the running (absolute) maximum of a process X. Then, Doob’s ${L^p}$ maximal inequality states that, for any cadlag martingale or nonnegative submartingale X and real ${p > 1}$,

 $\displaystyle \lVert X^*_t\rVert_p\le c_p \lVert X_t\rVert_p$ (1)

with ${c_p=p/(p-1)}$. Here, ${\lVert\cdot\rVert_p}$ denotes the standard ${L^p}$-norm, ${\lVert U\rVert_p\equiv{\mathbb E}[\lvert U\rvert^p]^{1/p}}$.

An obvious question to ask is whether it is possible to do any better. That is, can the constant ${c_p}$ in (1) be replaced by a smaller number? This is especially pertinent in the case of small p, since ${c_p}$ diverges to infinity as p approaches 1. The purpose of this post is to show, by means of an example, that the answer is no. The constant ${c_p}$ in Doob’s inequality is optimal. We will construct an example as follows.

Example 1 For any ${p > 1}$ and constant ${1 \le c < c_p}$ there exists a strictly positive cadlag ${L^p}$-integrable martingale ${\{X_t\}_{t\in[0,1]}}$ with ${X^*_1=cX_1}$.

For X as in the example, we have ${\lVert X^*_1\rVert_p=c\lVert X_1\rVert_p}$. So, supposing that (1) holds with any other constant ${\tilde c_p}$ in place of ${c_p}$, we must have ${\tilde c_p\ge c}$. By choosing ${c}$ as close to ${c_p}$ as we like, this means that ${\tilde c_p\ge c_p}$ and ${c_p}$ is indeed optimal in (1).

# The Maximum Maximum of Martingales with Known Terminal Distribution

In this post I will be concerned with the following problem — given a martingale X for which we know the distribution at a fixed time, and we are given nothing else, what is the best bound we can obtain for the maximum of X up until that time? This is a question with a long history, starting with Doob’s inequalities which bound the maximum in the ${L^p}$ norms and in probability. Later, Blackwell and Dubins (3), Dubins and Gilat (5) and Azema and Yor (1,2) showed that the maximum is bounded above, in stochastic order, by the Hardy-Littlewood transform of the terminal distribution. Furthermore, this bound is the best possible in the sense that there do exist martingales for which it can be attained, for any permissible terminal distribution. Hobson (7,8) considered the case where the starting law is also known, and this was further generalized to the case with a specified distribution at an intermediate time by Brown, Hobson and Rogers (4). Finally, Henry-Labordère, Obłój, Spoida and Touzi (6) considered the case where the distribution of the martingale is specified at an arbitrary set of times. In this post, I will look at the case where only the terminal distribution is specified. This leads to interesting constructions of martingales and, in particular, of continuous martingales with specified terminal distributions, with close connections to the Skorokhod embedding problem.

I will be concerned with the maximum process of a cadlag martingale X,

$\displaystyle X^*_t=\sup_{s\le t}X_s,$

which is increasing and adapted. We can state and prove the bound on ${X^*}$ relatively easily, although showing that it is optimal is more difficult. As the result holds more generally for submartingales, I state it in this case, although I am more concerned with martingales here.

Theorem 1 If X is a cadlag submartingale then, for each ${t\ge0}$ and ${x\in{\mathbb R}}$,

 $\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le\inf_{y < x}\frac{{\mathbb E}\left[(X_t-y)_+\right]}{x-y}.$ (1)

Proof: We just need to show that the inequality holds for each ${y < x}$, and then it immediately follows for the infimum. Choosing ${y < x^\prime < x}$, consider the stopping time

$\displaystyle \tau=\inf\{s\ge0\colon X_s\ge x^\prime\}.$

Then, ${\tau \le t}$ and ${X_\tau\ge x^\prime}$ whenever ${X^*_t \ge x}$. As ${f(z)\equiv(z-y)_+}$ is nonnegative and increasing in z, this means that ${1_{\{X^*_t\ge x\}}}$ is bounded above by ${f(X_{\tau\wedge t})/f(x^\prime)}$. Taking expectations,

$\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_{\tau\wedge t})\right]/f(x^\prime).$

Since f is convex and increasing, ${f(X)}$ is a submartingale so, using optional sampling,

$\displaystyle {\mathbb P}\left(X^*_t\ge x\right)\le{\mathbb E}\left[f(X_t)\right]/f(x^\prime).$

Letting ${x^\prime}$ increase to ${x}$ gives the result. ⬜
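As an illustration of theorem 1 (not needed for the proof), for Brownian motion the right hand side of (1) can be evaluated using the standard normal fact ${{\mathbb E}[(B_1-y)_+]=\varphi(y)-y(1-\Phi(y))}$, and compared with a Monte Carlo estimate of the left hand side. The function names and parameters below are arbitrary choices of mine.

```python
import math
import numpy as np

# Compare P(sup_{s<=1} B_s >= x) with the bound (1) for Brownian motion.
def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def call_price(y):
    # E[(B_1 - y)_+] for standard normal B_1
    return math.exp(-y * y / 2) / math.sqrt(2 * math.pi) - y * (1 - normal_cdf(y))

x = 1.0
bound = min(call_price(y) / (x - y) for y in np.linspace(-3, x - 1e-3, 2000))

rng = np.random.default_rng(seed=5)
paths = np.cumsum(rng.normal(0, (1 / 500) ** 0.5, size=(20000, 500)), axis=1)
prob = np.mean(paths.max(axis=1) >= x)      # Monte Carlo left hand side
print(prob, bound)                          # expect prob <= bound
```

For Brownian motion the bound is not attained, consistent with theorem 2: equality requires a different martingale with the same terminal law.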

The bound stated in Theorem 1 is also optimal, and can be achieved by a continuous martingale. In this post, all measures on ${{\mathbb R}}$ are defined with respect to the Borel sigma-algebra.

Theorem 2 If ${\mu}$ is a probability measure on ${{\mathbb R}}$ with ${\int\lvert x\rvert\,d\mu(x) < \infty}$ and ${t > 0}$ then there exists a continuous martingale X (defined on some filtered probability space) such that ${X_t}$ has distribution ${\mu}$ and (1) is an equality for all ${x\in{\mathbb R}}$.

# Properties of Quasimartingales

The previous two posts introduced the concept of quasimartingales, and noted that they can be considered as a generalization of submartingales and supermartingales. In this post we prove various basic properties of quasimartingales and of the mean variation, extending results of martingale theory to this situation.

We start with a version of optional stopping which applies for quasimartingales. For now, we just consider simple stopping times, which are stopping times taking values in a finite subset of the nonnegative extended reals ${\bar{\mathbb R}_+=[0,\infty]}$. Stopping a process can only decrease its mean variation (recall the alternative definitions ${{\rm Var}}$ and ${{\rm Var}^*}$ for the mean variation). For example, a process X is a martingale if and only if ${{\rm Var}(X)=0}$, so in this case the following result says that stopped martingales are martingales.

Lemma 1 Let X be an adapted process and ${\tau}$ be a simple stopping time. Then

 $\displaystyle {\rm Var}^*(X^\tau)\le{\rm Var}^*(X).$ (1)

Assuming, furthermore, that X is integrable,

 $\displaystyle {\rm Var}(X^\tau)\le{\rm Var}(X).$ (2)

and, more precisely,

 $\displaystyle {\rm Var}(X)={\rm Var}(X^\tau)+{\rm Var}(X-X^\tau)$ (3)

# The Burkholder-Davis-Gundy Inequality

The Burkholder-Davis-Gundy inequality is a remarkable result relating the maximum of a local martingale with its quadratic variation. Recall that [X] denotes the quadratic variation of a process X, and ${X^*_t\equiv\sup_{s\le t}\vert X_s\vert}$ is its maximum process.

Theorem 1 (Burkholder-Davis-Gundy) For any ${1\le p<\infty}$ there exist positive constants ${c_p,C_p}$ such that, for all local martingales X with ${X_0=0}$ and stopping times ${\tau}$, the following inequality holds.

 $\displaystyle c_p{\mathbb E}\left[ [X]^{p/2}_\tau\right]\le{\mathbb E}\left[(X^*_\tau)^p\right]\le C_p{\mathbb E}\left[ [X]^{p/2}_\tau\right].$ (1)

Furthermore, for continuous local martingales, this statement holds for all ${0 < p < \infty}$.

A proof of this result is given below. For ${p\ge 1}$, the theorem can also be stated as follows. The set of all cadlag martingales X starting from zero for which ${{\mathbb E}[(X^*_\infty)^p]}$ is finite is a vector space, and the BDG inequality states that the norms ${X\mapsto\Vert X^*_\infty\Vert_p={\mathbb E}[(X^*_\infty)^p]^{1/p}}$ and ${X\mapsto\Vert[X]^{1/2}_\infty\Vert_p}$ are equivalent.

The special case p=2 is the easiest to handle, and we have previously seen that the BDG inequality does indeed hold in this case with constants ${c_2=1}$, ${C_2=4}$. The significance of Theorem 1, then, is that this extends to all ${p\ge1}$.

One reason why the BDG inequality is useful in the theory of stochastic integration is as follows. Whereas the behaviour of the maximum of a stochastic integral is difficult to describe, the quadratic variation satisfies the simple identity ${\left[\int\xi\,dX\right]=\int\xi^2\,d[X]}$. Recall, also, that stochastic integration preserves the local martingale property. Stochastic integration does not preserve the martingale property. In general, integration with respect to a martingale only results in a local martingale, even for bounded integrands. In many cases, however, stochastic integrals are indeed proper martingales. The Ito isometry shows that this is true for square integrable martingales, and the BDG inequality allows us to extend the result to all ${L^p}$-integrable martingales, for ${p> 1}$.
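The identity ${\left[\int\xi\,dX\right]=\int\xi^2\,d[X]}$ is exact for discrete sums, as the following sketch illustrates (the path and integrand are arbitrary choices of mine); indeed, both sides reduce to the same sum of squared increments.

```python
import numpy as np

# Discrete check of [int xi dX] = int xi^2 d[X]:
# with dY_i = xi_{i-1} dX_i, we get (dY_i)^2 = xi_{i-1}^2 (dX_i)^2.
rng = np.random.default_rng(seed=6)
dX = rng.normal(0, 0.1, 1000)
X = np.cumsum(dX)
xi = np.sin(X)                                # any adapted integrand
dY = xi[:-1] * dX[1:]                         # left-point (predictable) sums
qv_Y = np.sum(dY ** 2)                        # [Y] as squared increments
qv_from_X = np.sum(xi[:-1] ** 2 * dX[1:] ** 2)  # int xi^2 d[X]
print(qv_Y, qv_from_X)                        # identical up to rounding
```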

Theorem 2 Let X be a cadlag ${L^p}$-integrable martingale for some ${1 < p < \infty}$, so that ${{\mathbb E}[\vert X_t\vert^p]<\infty}$ for each t. Then, for any bounded predictable process ${\xi}$, ${Y\equiv\int\xi\,dX}$ is also an ${L^p}$-integrable martingale.

# Martingales are Integrators

A major foundational result in stochastic calculus is that integration can be performed with respect to any local martingale. In these notes, a semimartingale was defined to be a cadlag adapted process with respect to which a stochastic integral exists satisfying some simple desired properties. Namely, the integral must agree with the explicit formula for elementary integrands and satisfy bounded convergence in probability. Then, the existence of integrals with respect to local martingales can be stated as follows.

Theorem 1 Every local martingale is a semimartingale.

This result can be combined directly with the fact that FV processes are semimartingales.

Corollary 2 Every process of the form X=M+V for a local martingale M and FV process V is a semimartingale.

Working from the classical definition of semimartingales as sums of local martingales and FV processes, the statements of Theorem 1 and Corollary 2 would be tautologies. The aim of this post, then, is to show that stochastic integration is well defined for all classical semimartingales. Put another way, Corollary 2 is equivalent to the statement that classical semimartingales satisfy the semimartingale definition used in these notes. The converse statement will be proven in a later post on the Bichteler-Dellacherie theorem, so the two semimartingale definitions do indeed agree.

# Martingale Inequalities

Martingale inequalities are an important subject in the study of stochastic processes. The subject of this post is Doob’s inequalities which bound the distribution of the maximum value of a martingale in terms of its terminal distribution, and is a consequence of the optional sampling theorem. We work with respect to a filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. The absolute maximum process of a martingale is denoted by ${X^*_t\equiv\sup_{s\le t}\vert X_s\vert}$. For any real number ${p\ge 1}$, the ${L^p}$-norm of a random variable ${Z}$ is

$\displaystyle \Vert Z\Vert_p\equiv{\mathbb E}[|Z|^p]^{1/p}.$

Then, Doob’s inequalities bound the distribution of the maximum of a martingale by the ${L^1}$-norm of its terminal value, and bound the ${L^p}$-norm of its maximum by the ${L^p}$-norm of its terminal value for all ${p>1}$.

Theorem 1 Let ${X}$ be a cadlag martingale and ${t>0}$. Then

1. for every ${K>0}$,

$\displaystyle {\mathbb P}(X^*_t\ge K)\le\frac{\lVert X_t\rVert_1}{K}.$

2. for every ${p>1}$,

$\displaystyle \lVert X^*_t\rVert_p\le \frac{p}{p-1}\Vert X_t\Vert_p.$

3. $\displaystyle \lVert X^*_t\rVert_1\le\frac e{e-1}{\mathbb E}\left[\lvert X_t\rvert \log\lvert X_t\rvert+1\right].$
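As a quick Monte Carlo sanity check of the second inequality (a sketch, not part of the notes; the choice of martingale and parameters are mine), take the exponential martingale ${X_t=\exp(B_t-t/2)}$ with ${p=2}$, for which ${\lVert X_1\rVert_2=e^{1/2}}$. The estimated norm of the maximum should fall below ${2\lVert X_1\rVert_2}$.

```python
import numpy as np

# Monte Carlo check of Doob's L^p inequality for the exponential
# martingale X_t = exp(B_t - t/2), with p = 2 and p/(p-1) = 2.
rng = np.random.default_rng(seed=7)
n_paths, n_steps = 20000, 500
dt = 1.0 / n_steps
t = dt * np.arange(1, n_steps + 1)
B = np.cumsum(rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
X = np.exp(B - t / 2)                           # martingale sample paths

max_norm = np.mean(X.max(axis=1) ** 2) ** 0.5   # estimate of ||X*_1||_2
term_norm = np.mean(X[:, -1] ** 2) ** 0.5       # ||X_1||_2, exactly e^{1/2}
print(max_norm, 2 * term_norm)
```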