In this post I will be concerned with the following problem — given a martingale *X* for which we know the distribution at a fixed time, and we are given nothing else, what is the best bound we can obtain for the maximum of *X* up until that time? This is a question with a long history, starting with Doob’s inequalities which bound the maximum in the $L^p$ norms and in probability. Later, Blackwell and Dubins (3), Dubins and Gilat (5) and Azema and Yor (1,2) showed that the maximum is bounded above, in stochastic order, by the Hardy-Littlewood transform of the terminal distribution. Furthermore, this bound is the best possible in the sense that there do exist martingales for which it can be attained, for any permissible terminal distribution. Hobson (7,8) considered the case where the starting law is also known, and this was further generalized to the case with a specified distribution at an intermediate time by Brown, Hobson and Rogers (4). Finally, Henry-Labordère, Obłój, Spoida and Touzi (6) considered the case where the distribution of the martingale is specified at an arbitrary set of times. In this post, I will look at the case where only the terminal distribution is specified. This leads to interesting constructions of martingales and, in particular, of *continuous* martingales with specified terminal distributions, with close connections to the Skorokhod embedding problem.

I will be concerned with the maximum process of a cadlag martingale *X*,

$$X^*_t = \sup_{s \le t} X_s,$$

which is increasing and adapted. We can state and prove the bound on $X^*$ relatively easily, although showing that it is optimal is more difficult. As the result holds more generally for submartingales, I state it in this case, although I am more concerned with martingales here.

**Theorem 1** *If X is a cadlag submartingale then, for each $t \ge 0$ and $x \in \mathbb{R}$,*

$$\mathbb{P}\left(X^*_t \ge x\right) \le \inf_{y < x} \frac{\mathbb{E}\left[(X_t - y)^+\right]}{x - y}. \qquad (1)$$

*Proof:* We just need to show that the inequality holds for each fixed $y < x$, and then it immediately follows for the infimum. Choosing $y < x' < x$, consider the stopping time

$$\tau = \inf\left\{s \ge 0 \colon X_s \ge x'\right\}.$$

Then, $\tau \le t$ and $X_\tau \ge x'$ whenever $X^*_t \ge x$. As $f(z) = (z - y)^+/(x' - y)$ is nonnegative and increasing in *z*, with $f(z) \ge 1$ for $z \ge x'$, this means that $1_{\{X^*_t \ge x\}}$ is bounded above by $f(X_{\tau \wedge t})$. Taking expectations,

$$\mathbb{P}\left(X^*_t \ge x\right) \le \mathbb{E}\left[f(X_{\tau \wedge t})\right].$$

Since *f* is convex and increasing, $f(X)$ is a submartingale so, using optional sampling,

$$\mathbb{E}\left[f(X_{\tau \wedge t})\right] \le \mathbb{E}\left[f(X_t)\right] = \frac{\mathbb{E}\left[(X_t - y)^+\right]}{x' - y}.$$

Letting $x'$ increase to $x$ gives the result. ⬜
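As a quick numerical illustration (my own addition, with arbitrary choices of walk length, level and sample size), the following Python snippet estimates both sides of (1) for a simple symmetric random walk, which is a martingale:

```python
import random

# Martingale: simple symmetric random walk started from 0.
# The walk length, level x and sample size are arbitrary illustrative choices.
random.seed(4)
t_steps, x, n = 100, 10, 20_000

finals, hits = [], 0
for _ in range(n):
    s, smax = 0, 0
    for _ in range(t_steps):
        s += random.choice((-1, 1))
        smax = max(smax, s)
    finals.append(s)
    hits += smax >= x

p_max = hits / n
# Right hand side of (1), estimated from the same samples over a grid of y < x.
bound = min(
    sum(max(f - y, 0) for f in finals) / n / (x - y)
    for y in range(-10, x)
)
assert p_max <= bound + 0.01  # the bound (1), up to Monte Carlo error
```

The estimated probability comes out strictly below the estimated bound, as it should, since the random walk does not have the extremal structure constructed later in this post.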

The bound stated in Theorem 1 is also optimal, and can be achieved by a *continuous* martingale. In this post, all measures on $\mathbb{R}$ are defined with respect to the Borel sigma-algebra.

**Theorem 2** *If $\mu$ is a probability measure on $\mathbb{R}$ with $\int |x|\,d\mu(x) < \infty$ and $t > 0$ then there exists a* continuous *martingale X (defined on some filtered probability space) such that $X_t$ has distribution $\mu$ and (1) is an equality for all $x \in \mathbb{R}$.*

I will not prove this yet, as the construction of martingales verifying this result will be given further below. The proof of Theorem 1 given above should, however, give a good clue as to how the optimal bound can be attained. In order for the (submartingale) inequality used there to actually be an equality, it is required that the process $(X - y)^+$ be a martingale starting from the time $\tau$, and that $(X_t - y)^+$ be equal to 0 at time *t* if the level *x* is not reached. This is the case if, for some $y < x$, we have $X_t \ge y$ whenever $X^*_t \ge x$ and $X_t \le y$ whenever $X^*_t < x$. Martingales constructed with this property will be given below.

Theorem 2 is particularly strong, as not only does it imply that the bound (1) is optimal, but also that there exists a single continuous martingale making (1) an equality simultaneously for *all* values of *x*. A consequence of this is that we also achieve an optimal upper bound for $\mathbb{E}[u(X^*_t)]$ for all bounded increasing functions $u\colon\mathbb{R}\to\mathbb{R}$. This is maybe best understood in terms of the *stochastic order* on measures, denoted by $\preceq$. We write $\mu \preceq \nu$ for probability measures $\mu$, $\nu$ on $\mathbb{R}$ if any of the following equivalent conditions are satisfied.

- $\mu\left((x,\infty)\right) \le \nu\left((x,\infty)\right)$ for all real *x*.
- $\int u\,d\mu \le \int u\,d\nu$ for all bounded increasing $u\colon\mathbb{R}\to\mathbb{R}$.
- $\int u\,d\mu \le \int u\,d\nu$ for all nonnegative increasing $u\colon\mathbb{R}\to\mathbb{R}$.
- there exists some probability space with real random variables *X*, *Y* with laws $\mu$, $\nu$ respectively such that $X \le Y$ (a.s.).

The equivalence of these conditions is straightforward, with only the existence of the random variables *X*, *Y* in the final statement needing further explanation. If $\mu \preceq \nu$ then, for any uniform random variable *U* on the unit interval, the quantiles $X = F_\mu^{-1}(U)$ and $Y = F_\nu^{-1}(U)$, where $F_\mu^{-1}(u) = \inf\{x \colon \mu((-\infty,x]) \ge u\}$, will have laws $\mu$, $\nu$ respectively, and satisfy $X \le Y$.
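This coupling is easy to demonstrate numerically. Here is a small Python sketch (the pair of exponential laws is just an illustrative choice of mine) realising $X \le Y$ pathwise from a common uniform variable:

```python
import math
import random

def quantile_exp(rate, u):
    # Inverse CDF of the exponential distribution with the given rate.
    return -math.log(1.0 - u) / rate

# Exponential(1) is stochastically dominated by Exponential(1/2): it has
# smaller tails, so its quantile function is pointwise smaller.
random.seed(0)
for _ in range(10_000):
    u = random.random()
    x = quantile_exp(1.0, u)  # sample of the smaller law
    y = quantile_exp(0.5, u)  # sample of the larger law, from the same U
    assert x <= y  # the coupling realises the stochastic order pathwise
```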

Next, the Hardy-Littlewood transform of a measure $\mu$ on $\mathbb{R}$ with $\int |x|\,d\mu(x) < \infty$ is defined by

$$\mu^*\left([x,\infty)\right) = 1 \wedge \inf_{y < x} \frac{\int (z - y)^+\,d\mu(z)}{x - y}. \qquad (2)$$

It can be seen that the right hand side of (2) is left-continuous and decreasing from 1 to 0 as *x* increases from $-\infty$ to $\infty$, so gives a well-defined Borel measure $\mu^*$. As will be seen below in Lemma 6, the Hardy-Littlewood transform can alternatively be defined as follows. If $h(U)$ has law $\mu$, for a decreasing integrable function *h* and uniform random variable *U* on $(0,1)$, then $\bar h(U)$ has law $\mu^*$, where $\bar h(u) = u^{-1}\int_0^u h(v)\,dv$ is the running average of *h*. Next, I denote the law of a real random variable *V* by $\mathcal{L}(V)$.
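As a sanity check of this equivalence, take $\mu$ uniform on $(0,1)$ (an illustrative choice of mine), so that $h(u) = 1-u$ and the running average is $\bar h(u) = 1-u/2$. The tail of $\bar h(U)$ works out to be $2(1-x)$ for $x \in [1/2,1]$, and the following Python sketch compares it with the infimum in (2):

```python
def tail_via_definition(x, steps=10_000):
    # inf over y < x of E[(Z - y)^+] / (x - y) for Z ~ Uniform(0, 1),
    # using the closed form E[(Z - y)^+] = (1 - y)**2 / 2 for 0 <= y <= 1.
    best = 1.0
    for i in range(steps):
        y = x * i / steps  # grid over [0, x)
        expectation = max(1.0 - y, 0.0) ** 2 / 2.0
        best = min(best, expectation / (x - y))
    return best

def tail_via_running_average(x):
    # P(hbar(U) >= x) with hbar(u) = 1 - u / 2, which is 2 * (1 - x) on [1/2, 1].
    return max(0.0, min(1.0, 2.0 * (1.0 - x)))

for x in (0.6, 0.75, 0.9):
    assert abs(tail_via_definition(x) - tail_via_running_average(x)) < 1e-3
```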

One construction of a martingale with terminal distribution $\mu$ is given simply by setting $X_s = m$ for all $s < t$ and letting $X_t$ have law $\mu$, where $m$ is the mean of $\mu$, in which case $X^*_t = X_t \vee m$. This means that the probability in (1) must be 1 for all $x \le m$. Correspondingly, in that case, $\mu^*([x,\infty))$ should be equal to 1. This can also be seen directly from the definition (2) since, by Jensen’s inequality,

$$\int (z - y)^+\,d\mu(z) \ge \left(\int (z - y)\,d\mu(z)\right)^+ = (m - y)^+ \ge x - y$$

for all $y < x \le m$.

With this notation, Theorem 1 can be restated as follows.

**Theorem 3** *If X is a cadlag martingale then $\mathcal{L}(X^*_t) \preceq \mathcal{L}(X_t)^*$ for all $t \ge 0$.*

Similarly, Theorem 2 can be restated.

**Theorem 4** *If $\mu$ is a probability measure on $\mathbb{R}$ with $\int |x|\,d\mu(x) < \infty$ and $t > 0$ then there exists a* continuous *martingale X (defined on some filtered probability space) such that $\mathcal{L}(X_t) = \mu$ and $\mathcal{L}(X^*_t) = \mu^*$.*

As previously noted, a particular strength of this result is that there exists a single martingale simultaneously maximizing $\mathbb{P}(X^*_t \ge x)$ for all *x*. A priori, it does not seem obvious, or even very likely at all, that this should be possible.

The optimal bound (1) and measure $\mu^*$ are easily understood with a bit of graphical help. For now, and for the remainder of the post, I fix $\mu$ to be a measure on $\mathbb{R}$ with $\int |x|\,d\mu(x) < \infty$, let $m = \int x\,d\mu(x)$ be its mean, and $\mu^*$ be its Hardy-Littlewood transform (2). The measure $\mu$ can be represented by a function

$$c(x) = \int (z - x)^+\,d\mu(z).$$

This is a non-negative, convex and decreasing function of *x*. Also, Jensen’s inequality shows that $c(x) \ge (m - x)^+$, and an application of dominated convergence gives the limit

$$c(x) \to 0$$

as $x \to \infty$. If $X_t$ has distribution $\mu$ then (1) becomes

$$\mathbb{P}\left(X^*_t \ge x\right) \le \inf_{y < x} \frac{c(y)}{x - y}.$$

See Figure 1. The right hand side is equal to the absolute gradient of the line passing through the points $(x, 0)$ and $(y, c(y))$. The minimum is attained precisely when the line is a tangent to *c*, which always occurs for some $y < x$, so long as $x > m$ and $c(x) > 0$. If the minimum occurs at $y$, then the link between the martingale and its maximum in the optimal case is given by $\{X^*_t \ge x\} = \{X_t \ge y\}$.

Alternatively, as *c* is convex then it has a left (and right) hand derivative everywhere; denote the left hand derivative by $c'(x-)$. A tangent at *x* is given by the line $z \mapsto c(x) + c'(x-)(z - x)$. This crosses the *x*-axis at the point

$$b(x) = x - \frac{c(x)}{c'(x-)}. \qquad (3)$$

We note here that the convexity of *c* together with the limit $c(x) \to 0$ as $x \to \infty$ ensures that $c'(x-) < 0$ whenever $c(x) > 0$. So, $b(x)$ is well defined, and is called the *barycenter* function of $\mu$. It can also be written as

$$b(x) = \frac{\int_{[x,\infty)} z\,d\mu(z)}{\mu\left([x,\infty)\right)}$$

whenever $\mu([x,\infty)) > 0$. This is a left-continuous inverse to the function linking the optimal martingale to its maximum so, at least when *b* is continuous and strictly increasing, the relation between the optimal martingale and its maximum is given by $X^*_t = b(X_t)$. More generally, we will have $X^*_t \ge b(X_t)$.
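For a concrete example of the barycenter function (my own choice, not needed for the argument): if $\mu$ is the standard exponential distribution then memorylessness gives $b(x) = x + 1$, which the following Python sketch confirms by Monte Carlo:

```python
import math
import random

random.seed(1)
# Samples from mu = Exponential(1) via inverse-CDF sampling.
samples = [-math.log(1.0 - random.random()) for _ in range(200_000)]

def barycenter(x):
    # b(x) = E[Z | Z >= x], the mean of mu restricted to [x, infinity).
    tail = [z for z in samples if z >= x]
    return sum(tail) / len(tail)

for x in (0.0, 0.5, 1.0):
    # Memorylessness of the exponential law gives b(x) = x + 1 exactly.
    assert abs(barycenter(x) - (x + 1.0)) < 0.05
```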

#### Constructing a Cadlag Solution

I will now show how to construct an example of a martingale with specified terminal distribution for which the maximum attains the optimal bound described by Theorem 1. We may as well fix the terminal time to be $t = 1$, so we do this now. To start with, I give a non-continuous example and leave the construction of a continuous martingale for later. The example given here is interesting in its own right, and is a relatively straightforward construction which is useful in practice for constructing martingales with specified terminal law.

To start, we can state conditions on the martingale for the optimal maximum to be attained. These conditions can be obtained simply by going through the proof of Theorem 1 above and checking when each of the inequalities can be replaced by equality. In the following, $g$ is allowed to take the value $-\infty$ because, when the law of $X_1$ is unbounded below, then it can be seen that $g(x) \to -\infty$ as $x$ decreases to $m$, so we must take $g(m) = -\infty$ in order for it to be increasing. This is not a problem because we will always have $X^*_1 > m$ (a.s.) whenever $X_1$ is not deterministic.

**Lemma 5** *Let $X = (X_t)_{t \in [0,1]}$ be a cadlag martingale satisfying*

- *$X_0 = m$ almost surely.*
- *the maximum process $X^*$ is continuous.*
- *$X_1 = g(X^*_1)$ (a.s.) for an increasing function $g$ with $g(x) < x$ for all $x > m$.*

*Then, $\mathcal{L}(X^*_1) = \mathcal{L}(X_1)^*$.*

*Proof:* Choosing any $x$, we need to show that $\mathbb{P}(X^*_1 \ge x)$ achieves the upper bound (1). First, if $x \le m$ then we have $X^*_1 \ge X_0 = m \ge x$, so the probability is equal to 1 in (1), as required. We just need to consider $x > m$. As $g(x) < x$, replacing $y$ by $g(x)$ if necessary, we can suppose that $y = g(x)$.

Let $\tau$ be the stopping time

$$\tau = \inf\left\{s \ge 0 \colon X_s \ge x\right\}.$$

By continuity of $X^*$, we have $X_\tau = x$ whenever $\tau < \infty$, and $\{\tau \le 1\} = \{X^*_1 \ge x\}$. If $\tau \le 1$ then $X^*_1 \ge x$ and $X_1 = g(X^*_1) \ge g(x) = y$. So,

$$1_{\{\tau \le 1\}}(X_1 - y)^+ = 1_{\{\tau \le 1\}}(X_1 - y).$$

Now, if $\tau > 1$ then $X^*_1 < x$ and $X_1 = g(X^*_1) \le g(x) = y$, so $(X_1 - y)^+ = 0$. So, applying the martingale property to $X$ at the bounded stopping time $\tau \wedge 1$,

$$\mathbb{E}\left[(X_1 - y)^+ \,\middle|\, \mathcal{F}_{\tau \wedge 1}\right] = 1_{\{\tau \le 1\}}(X_\tau - y) = 1_{\{X^*_1 \ge x\}}(x - y).$$

Taking expectations,

$$\mathbb{E}\left[(X_1 - y)^+\right] = (x - y)\,\mathbb{P}\left(X^*_1 \ge x\right)$$

as required. ⬜

Now, we move on to a construction. Start with any probability space on which there exists a random variable *U* with the uniform law on $(0,1)$. Define the filtration

$$\mathcal{F}_t = \sigma\left(1_{\{U \ge 1-s\}}U \colon s \le t\right).$$

Next, choose any decreasing integrable function $h\colon(0,1)\to\mathbb{R}$ and define the running average, $\bar h$, as

$$\bar h(u) = \frac{1}{u}\int_0^u h(v)\,dv. \qquad (4)$$

Now, we define the cadlag process

$$X_t = \begin{cases} \bar h(1-t), & \text{if } U < 1-t,\\ h(U), & \text{if } U \ge 1-t, \end{cases} \qquad (5)$$

for $t \in [0,1]$.

It can be seen that $X_t = \mathbb{E}[X_1 \mid \mathcal{F}_t]$, so that *X* is a martingale. Its terminal value and law are

$$X_1 = h(U), \qquad \mathcal{L}(X_1) = \mathcal{L}\left(h(U)\right).$$

The function *h* can be chosen such that the terminal law is equal to any distribution $\mu$ that we like, by setting

$$h(u) = \inf\left\{x \in \mathbb{R} \colon \mu\left((x,\infty)\right) \le u\right\}.$$

Finally, the martingale just constructed does have the optimal maximum law.

**Lemma 6** *The martingale X constructed above, by (5), satisfies $\mathcal{L}(X^*_1) = \mathcal{L}(X_1)^*$.*

*Proof:* We just need to show that the conditions of Lemma 5 are satisfied. From the definition,

$$X_0 = \bar h(1) = \int_0^1 h(v)\,dv = m.$$

Also, since *h* is decreasing, $\bar h$ is decreasing and,

$$X^*_t = \bar h\left((1-t) \vee U\right)$$

is continuous. To complete the proof, we just need to construct the increasing function $g$ such that

$$h(u) = g\left(\bar h(u)\right)$$

for all $u \in (0,1)$. As $h, \bar h$ are both decreasing, it needs to be shown that *h* is constant on any interval for which $\bar h$ is constant. Replacing $h$ by its left-limit if necessary, we can suppose that *h* is left-continuous. Then, $u\bar h(u) = \int_0^u h(v)\,dv$ so, if $\bar h$ is constant on an interval $(a, b)$, then $h(u) = \bar h(u)$ for almost every $u \in (a, b)$. As *h* is left-continuous and decreasing, this means that $h(u) = \bar h(u)$ for every $u \in (a, b]$ and, therefore, *h* is constant on the interval. So, $g$ is well-defined and increasing. ⬜

In the construction given here, the martingale *X* follows the deterministic, continuous and increasing curve $t \mapsto \bar h(1-t)$, up until the stopping time $\tau = 1 - U$, after which it is constant at $X_1 = h(U)$. See Figure 2. We could have constructed the solution directly from this description, although the construction above from the intermediate function *h* is useful for describing solutions in practice.
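The construction is simple to simulate. In the following Python sketch I take $\mu$ to be the uniform distribution on $(0,1)$ (an illustrative choice of mine), so that $h(u) = 1-u$ and $\bar h(u) = 1-u/2$; each path jumps off the curve at time $1-U$, so the terminal value is $h(U)$ and the maximum is $\bar h(U)$, whose tail should be the Hardy-Littlewood value $2(1-x)$ on $[1/2,1]$:

```python
import random

random.seed(2)

def h(u):
    # Decreasing function with h(U) ~ Uniform(0, 1).
    return 1.0 - u

def hbar(u):
    # Running average of h.
    return 1.0 - u / 2.0

def sample_path():
    # The path follows hbar(1 - t) until the jump time 1 - U, then equals h(U);
    # so the terminal value is h(U) and the maximum is hbar(U).
    u = random.random()
    return h(u), hbar(u)

n = 100_000
pairs = [sample_path() for _ in range(n)]
p_max = sum(mx >= 0.75 for _, mx in pairs) / n
mean_terminal = sum(term for term, _ in pairs) / n
assert abs(p_max - 0.5) < 0.01          # Hardy-Littlewood tail 2 * (1 - 0.75)
assert abs(mean_terminal - 0.5) < 0.01  # Uniform(0, 1) terminal law has mean 1/2
```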

#### Constructing a Continuous Solution

I will now construct a continuous solution, which will be given by a deterministic time change of stopped Brownian motion. This method has very close connections with the Skorokhod embedding problem. Given a Brownian motion *B*, this asks for a stopping time $\tau$ such that the stopped process $B^\tau$ is a uniformly integrable martingale and $B_\tau$ has a prescribed distribution. In fact, the construction I give here is essentially given by the Azema-Yor solution to the stopping problem.

The idea is to take a Brownian motion *B* starting from *m* and define a stopping time $\tau$ to be the first time at which $B_t \le g(B^*_t)$. Equivalently, it can be constructed from the barycenter function (3) by stopping at the first time for which $B^*_t \ge b(B_t)$. This ensures that $B_\tau = g(B^*_\tau)$. After a deterministic time-change, say $X_t = B_{\tau \wedge \frac{t}{1-t}}$, this gives a continuous local martingale *X* satisfying $X_1 = g(X^*_1)$ as for the cadlag solution above. See Figure 3, where $\sigma = \tau/(1+\tau)$ is the stopping time for the time-changed process. It is possible to derive the laws of $X_1$ and $X^*_1$ from the relation $X_1 = g(X^*_1)$ so, in particular, they must be the same as for the cadlag solution above. Although this can be done directly, the proof is made significantly easier through the use of *Azema-Yor processes*. The process *M* in the following lemma is known as an Azema-Yor process and the result holds for all measurable and locally bounded *u*, although such generality is unnecessary here.

**Lemma 7 (Azema-Yor)** *Let X be a semimartingale such that $X^*$ is continuous and let $u\colon\mathbb{R}\to\mathbb{R}$ be continuously differentiable. Then, setting $U$ to be an antiderivative of $u$, the process*

$$M_t = U(X^*_t) - u(X^*_t)\left(X^*_t - X_t\right)$$

*is a semimartingale satisfying*

$$dM_t = u(X^*_t)\,dX_t. \qquad (6)$$

*In particular, if X is a local martingale then so is M.*

*Proof:* As $X^*$ is a continuous increasing process and *u* is continuously differentiable, $u(X^*)$ is a continuous FV process and, so, has zero quadratic variation. Using $dU(X^*_t) = u(X^*_t)\,dX^*_t$, we apply integration by parts,

$$dM_t = u(X^*_t)\,dX^*_t - u(X^*_t)\,d\left(X^*_t - X_t\right) - \left(X^*_t - X_t\right)\,du(X^*_t) = u(X^*_t)\,dX_t - \left(X^*_t - X_t\right)\,du(X^*_t). \qquad (7)$$

Next, using the fact that $X^*$ is constant over any interval on which $X^* > X$, and that $\{t \colon X^*_t > X_t\}$ is a countable union of such intervals gives

$$\int_0^t \left(X^*_s - X_s\right)\,du(X^*_s) = 0.$$

Putting this back in to (7) gives (6) as required. Finally, if *X* is a cadlag local martingale then, as $u(X^*)$ is locally bounded, equation (6) shows that *M* is also a local martingale. ⬜

Now, we show that the laws of the maximum and terminal value of such a martingale can be derived from the relation $X_1 = g(X^*_1)$. In the following, in order to handle the case where we do not yet know that *X* is a proper martingale, rather than just a local martingale, we also impose the condition that $X_t \ge g(X^*_t)$ for all *t*. It can be seen that this follows from the first condition, so is redundant, whenever *X* is a proper martingale.

**Lemma 8** *Let $X = (X_t)_{t \in [0,1]}$ be a cadlag local martingale such that $X_0 = m$ (a.s.) and $X^*$ is continuous. Suppose that $g$ is increasing with $g(x) < x$ for $x > m$ and,*

- *$X_1 = g(X^*_1)$ (a.s.),*
- *$X_t \ge g(X^*_t)$ (a.s.) for all t.*

*Then, $F(x) = \mathbb{P}(X^*_1 > x)$ is the unique right-continuous function satisfying the ODE*

$$F'(x) = \frac{F(x)}{g(x) - x} \qquad (8)$$

*for $x > m$, and $F(x) = 1$ for $x \le m$.*

*Proof:* The condition that $X^*_1 \ge X_0 = m$ immediately gives $F(x) = 1$ for $x < m$. For any twice continuously differentiable $u\colon\mathbb{R}\to\mathbb{R}$ with compact support in $(m,\infty)$, set $U(x) = \int_m^x u(y)\,dy$ and let *M* be the local martingale defined in Lemma 7. As $u$ is bounded by some $K$ and has support in $[a, L]$ for some $m < a < L$,

$$\left|M_t\right| = \left|U(X^*_t) - u(X^*_t)\left(X^*_t - X_t\right)\right| \le K(L - m) + K\left(L - g(a)\right)$$

is bounded, using $X_t \ge g(X^*_t) \ge g(a)$ whenever $u(X^*_t) \ne 0$. Hence, *M* is uniformly bounded, and is a proper martingale. Therefore $\mathbb{E}[M_1] = M_0 = 0$ and,

$$\mathbb{E}\left[U(X^*_1)\right] = \mathbb{E}\left[u(X^*_1)\left(X^*_1 - X_1\right)\right].$$

On the left hand side, we substitute in $U(x) = \int u(y)1_{\{y < x\}}\,dy$ and use the fact that the law of $X^*_1$ is given by $-dF$. On the right, we replace $X_1$ by $g(X^*_1)$ to obtain,

$$\int u(y)F(y)\,dy = -\int u(y)\left(y - g(y)\right)\,dF(y).$$

This proves (8) over $(m,\infty)$. Stopping *X* as soon as it reaches a level $x > m$ gives a local martingale bounded above, hence a submartingale and, taking expected values gives

$$m \le x\,\mathbb{P}\left(X^*_1 \ge x\right) + g(x)\left(1 - \mathbb{P}\left(X^*_1 \ge x\right)\right).$$

Rearranging,

$$\mathbb{P}\left(X^*_1 \ge x\right) \ge \frac{m - g(x)}{x - g(x)}$$

and, so, letting $x$ decrease to $m$ shows that $F(x) = 1$ also holds at $x = m$.

Finally, suppose that *G* is another right-continuous solution to (8) over $(m,\infty)$ with $G(x) = 1$ for $x \le m$. Setting $D = (F - G)^2$ then $D = 0$ at $x = m$ and,

$$D'(x) = \frac{2D(x)}{g(x) - x} \le 0$$

over $(m,\infty)$. As $D$ is nonnegative and $g(x) - x$ is negative, this implies that $D$ is decreasing, so it is identically zero, and the solution is unique. ⬜
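To illustrate the ODE characterisation with a concrete case (my own example): for $\mu$ uniform on $(0,1)$ we can take $h(u) = 1-u$, $\bar h(u) = 1-u/2$, giving $g(x) = 2x-1$ for $x \in (1/2, 1)$, and the tail $F(x) = 2(1-x)$ of the Hardy-Littlewood transform satisfies $F'(x) = F(x)/(g(x)-x)$ with $F(1/2) = 1$. A short Euler integration (a sketch, assuming this form of the equation) recovers it numerically:

```python
def g(x):
    # Inverse barycenter for mu = Uniform(0, 1): b(y) = (1 + y) / 2, so g(x) = 2x - 1.
    return 2.0 * x - 1.0

# Euler scheme for F'(x) = F(x) / (g(x) - x), started from F(1/2) = 1.
dx = 1e-5
x, f = 0.5, 1.0
while x < 0.75:
    f += dx * f / (g(x) - x)
    x += dx

# The known Hardy-Littlewood tail for Uniform(0, 1) is F(x) = 2 * (1 - x).
assert abs(f - 2.0 * (1.0 - x)) < 1e-3
```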

I can now describe a continuous martingale and prove that it has the required terminal and maximum distribution. To start, we define an increasing function $g$. The idea, as explained above, is that $g$ is chosen to maximise $\mathbb{P}(X^*_1 \ge x)$ whenever $X_1 = g(X^*_1)$. In order to obtain a quick proof making use of the cadlag martingale construction above, it will be more convenient to choose $g$ as in the proof of Lemma 6 above. We first let $h\colon(0,1)\to\mathbb{R}$ be a decreasing function such that $h(U)$ has distribution $\mu$ for uniformly distributed random variables *U* on $(0,1)$, and let $\bar h$ be its running average (4). Without loss of generality, we take *h* to be left-continuous. As explained in the proof of Lemma 6, we can write *h* as a function of $\bar h$,

$$h(u) = g\left(\bar h(u)\right).$$

This uniquely defines $g$ as a right-continuous and increasing function on the image of $\bar h$. It can be seen that this choice of $g$ does indeed maximise the law of the maximum, although I will not use this fact in the proof. We can extend $g$ to all of $\mathbb{R}$ by setting $g(x) = -\infty$ for $x < m$ and, for the remaining $x$, set $g(x) = \inf\{h(u) \colon u \in (0,1),\, \bar h(u) > x\}$. Then, $g$ is right-continuous and increasing.

Now, let *B* be a standard Brownian motion, starting from $B_0 = m$, defined on some filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$. Define the stopping time

$$\tau = \inf\left\{t \ge 0 \colon B_t \le g(B^*_t)\right\}.$$

The recurrence of Brownian motion implies that $\tau$ is almost surely finite. Indeed, choosing any $x > m$ with $g(x)$ finite, *B* hits the level *x* and, subsequently, the level $g(x)$ at some finite time, by which point $B_t \le g(x) \le g(B^*_t)$. Hence, $\tau$ is finite with probability 1. Next, continuity of *B* ensures that $B_\tau = g(B^*_\tau)$ and $B_t \ge g(B^*_t)$ for all $t \le \tau$.

Apply the deterministic time-change

$$X_t = B_{\tau \wedge \frac{t}{1-t}}$$

over $0 \le t < 1$ (in particular, $X_0 = m$), and set $X_1 = B_\tau$. Then, by the martingale property for the stopped process $B^\tau$, *X* will be a martingale with respect to the time-changed filtration

$$\mathcal{G}_t = \mathcal{F}_{\tau \wedge \frac{t}{1-t}}$$

over $t < 1$. As $\frac{t}{1-t} \to \infty$ as $t \to 1$, and $\tau$ is finite, *X* is a continuous local martingale over $[0,1]$.

Finally, I show that *X* is a martingale with the required terminal and maximum distributions.

**Lemma 9** *The process X is a proper martingale with $\mathcal{L}(X_1) = \mu$ and $\mathcal{L}(X^*_1) = \mu^*$.*

*Proof:* By Lemma 8, the distribution of $X^*_1$, and hence of $X_1 = g(X^*_1)$, is uniquely determined by the property that *X* is a local martingale with $X_0 = m$, $X_1 = g(X^*_1)$ and $X_t \ge g(X^*_t)$. So, they are the same as for the solution given by (5), for which $\mathcal{L}(X_1) = \mu$ and, by Lemma 6, $\mathcal{L}(X^*_1) = \mathcal{L}(X_1)^* = \mu^*$.

It only remains to prove that *X* is a proper martingale. Choosing $0 < t < 1$, we have $X_s \ge g(X^*_s) \ge g(X^*_t)$ over $t \le s \le 1$. As $X - g(X^*_t)$ is a nonnegative local martingale over $[t, 1]$, it is a supermartingale,

$$\mathbb{E}\left[X_1 \,\middle|\, \mathcal{G}_t\right] \le X_t.$$

However, we know that $X_1$ has distribution $\mu$, so has mean *m*. Also, as *X* is a proper martingale over $[0,1)$ with $X_0 = m$, $X_t$ also has mean *m*. This implies that

$$X_t - \mathbb{E}\left[X_1 \,\middle|\, \mathcal{G}_t\right]$$

is nonnegative with zero mean, so is zero. Hence, *X* is a proper martingale. ⬜
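Finally, the continuous construction is easy to simulate (a rough Euler sketch, again with $\mu$ uniform on $(0,1)$, my own illustrative choice). The barycenter is $b(x) = (1+x)/2$, so we run a random walk approximation to Brownian motion from $m = 1/2$ and stop the first time the running maximum reaches $(1+B_t)/2$; up to discretisation error, the stopped value should then be uniform on $(0,1)$ and the maximum should have tail $2(1-x)$:

```python
import math
import random

random.seed(3)
dt = 1e-4
step_sd = math.sqrt(dt)

def stopped_path():
    # Euler simulation of Brownian motion from m = 1/2, stopped at the first
    # time the running maximum reaches the barycenter (1 + B) / 2.
    b = running_max = 0.5
    while running_max < (1.0 + b) / 2.0:
        b += random.gauss(0.0, step_sd)
        running_max = max(running_max, b)
    return b, running_max

paths = [stopped_path() for _ in range(4000)]
mean_terminal = sum(b for b, _ in paths) / len(paths)
p_max = sum(mx >= 0.75 for _, mx in paths) / len(paths)
assert abs(mean_terminal - 0.5) < 0.03  # terminal law should have mean 1/2
assert abs(p_max - 0.5) < 0.04          # P(max >= 3/4) = 2 * (1 - 3/4) = 1/2
```

The small biases here come purely from the Euler discretisation of the Brownian path; refining `dt` tightens the agreement.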

#### References

1. Azéma, J., and Yor, M. (1979) *Une solution simple au problème de Skorokhod*. Séminaire de Probabilités XIII, Lecture Notes in Math. Vol. 721, 90–115. doi:10.1007/BFb0070852
2. Azéma, J., and Yor, M. (1979) *Le problème de Skorokhod: Compléments à “Une solution simple au problème de Skorokhod”*. Séminaire de Probabilités XIII, Lecture Notes in Math. Vol. 721, 625–633. doi:10.1007/BFb0070901
3. Blackwell, D., and Dubins, L.E. (1963) *A converse to the dominated convergence theorem*. Illinois J. Math. Vol. 7, no. 3, 508–514.
4. Brown, H., Hobson, D.G., and Rogers, L.C.G. (2001) *The maximum maximum of a martingale constrained by an intermediate law*. Probab. Theory Related Fields, Vol. 119, 558–578. doi:10.1007/PL00008771
5. Dubins, L.E., and Gilat, D. (1978) *On the distribution of maxima of martingales*. Proc. Amer. Math. Soc., Vol. 68, 337–338. doi:10.2307/2043117
6. Henry-Labordère, P., Obłój, J., Spoida, P., and Touzi, N. (2016) *The maximum maximum of a martingale with given n marginals*. Annals of Applied Probability, Vol. 26, No. 1, 1–44. doi:10.1214/14-AAP1084. Also available at arXiv:1203.6877.
7. Hobson, D.G. (1998) *The maximum maximum of a martingale*. Séminaire de Probabilités XXXII, Lecture Notes in Math. Vol. 1686, 250–263. doi:10.1007/BFb0101762
8. Hobson, D.G. (1998) *Robust hedging of the lookback option*. Finance Stoch., Vol. 2, 329–347. doi:10.1007/s007800050044