The *drawdown* of a stochastic process is the amount by which it has dropped since it last hit its maximum value so far. For a process *X* with running maximum *X*^{∗}_{t} = sup_{s ≤ t}*X*_{s}, the drawdown is thus *X*^{∗}_{t} – *X*_{t}, which is a nonnegative process. This is illustrated in figure 1 below.

The previous post used the reflection principle to show that the maximum of a Brownian motion has the same distribution as its terminal absolute value. That is, *X*^{∗}_{t} and |*X*_{t}| are identically distributed.

For a process *X* started from zero, its maximum and drawdown can be written as *X*^{∗}_{t} – *X*_{0} and *X*^{∗}_{t} – *X*_{t}. Reversing the process in time across the interval [0, *t*] will exchange these values. So, reversing in time and translating so that it still starts from zero will exchange the maximum value and the drawdown. Specifically, write

*Y*_{s} = *X*_{t – s} – *X*_{t}

for time index 0 ≤ *s* ≤ *t*. The maximum of *Y* is equal to the drawdown of *X*,

*Y*^{∗}_{t} = sup_{s ≤ t}(*X*_{t – s} – *X*_{t}) = *X*^{∗}_{t} – *X*_{t}.

If *X* is standard Brownian motion then so is *Y*, since the independent normal increments property for *Y* follows from that of *X*. As already stated, the maximum *Y*^{∗}_{t} = *X*^{∗}_{t} – *X*_{t} has the same distribution as the absolute value |*Y*_{t}|= |*X*_{t}|. So, the drawdown has the same distribution as the absolute value at each time.
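
As a quick illustration (my own sketch, not part of the post's argument), the time-reversal identity can be checked on a discrete random-walk approximation of Brownian motion, where it holds exactly path by path, not just in distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
# scaled random-walk approximation of Brownian motion on [0, 1], started at 0
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, n ** -0.5, n))])

# reverse in time over [0, 1] and translate so the path starts from zero
Y = X[::-1] - X[-1]

# the maximum of Y equals the drawdown of X at the terminal time, pathwise
assert Y[0] == 0.0
assert np.isclose(Y.max(), X.max() - X[-1])
print("terminal drawdown of X:", X.max() - X[-1])
```

The pathwise nature of the identity is exactly what makes the time-reversal argument work: no distributional property of *X* is used at this stage.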

**Lemma 1** If *X* is standard Brownian motion, then *X*^{∗}_{t} – *X*_{t} has the same distribution as |*X*_{t}| at each time *t* ≥ 0.
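
Lemma 1 is easy to test by simulation. The sketch below (mine, with arbitrary simulation parameters) compares the sample mean of the drawdown at *t* = 1 with that of |*X*_{1}|; both should be close to √(2/π) ≈ 0.798, up to a small downward discretization bias in the simulated maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
paths, steps = 20_000, 1_000
# increments of Brownian motion on a grid of [0, 1]
dW = rng.normal(0.0, steps ** -0.5, (paths, steps))
X = np.cumsum(dW, axis=1)

# the running maximum includes the starting value X_0 = 0, hence the clip
drawdown = X.max(axis=1).clip(min=0.0) - X[:, -1]   # X*_1 - X_1
abs_end = np.abs(X[:, -1])                          # |X_1|

print("mean drawdown:", drawdown.mean())
print("mean |X_1|   :", abs_end.mean())
```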

This equivalence of distributions can be taken much further. Whereas the post on the reflection principle only compared distributions at fixed times, it turns out that the joint distributions of the entire drawdown *process* are equivalent to those of the reflecting Brownian motion |*X*|. Certainly, this is far from true for the running maximum *X*^{∗} which is increasing and never hits zero after time 0, in contrast to |*X*| which is never increasing over any nontrivial time intervals, and hits 0 every time that *X* changes sign. On the other hand, the drawdown *X*^{∗} – *X* moves along with the Brownian motion –*X* except for the times at which *X* hits a maximum, in which case the drawdown hits 0. This is qualitatively similar to the paths of a reflecting Brownian motion, and we can show that they are identically distributed.

The idea is to show that both the absolute value and drawdown of a Brownian motion are Markov with the same transition probabilities, from which it follows that they have the same joint distribution. This requires looking at the distribution of the drawdown of *X* started at each positive time *s* > 0. However, the maximum of the forward-started paths *Y*_{t} = *X*_{s + t} is not the same as the forward-started path of *X*^{∗}, since the latter also maximizes over the values of *X*_{u} for *u* < *s*. Instead, we have *X*^{∗}_{s + t} = *X*^{∗}_{s} ∨ *Y*^{∗}_{t}. For this reason, we need a generalization of lemma 1 in which the running maximum of the process is itself maxed with an independent value.

Consider fixing a value *M* ≥ 0, which could represent the maximum value of *X* at time zero (e.g., if there was some history of the process before time 0 attaining this value). So, the running maximum of *X* will be replaced by *M* ∨ *X*^{∗} and the drawdown is replaced by *M* ∨ *X*^{∗} – *X*. Lemma 1 carries across to this case.

**Lemma 2** If *X* is standard Brownian motion and *M* ≥ 0 is a fixed value, then *M* ∨ *X*^{∗}_{t} – *X*_{t} has the same distribution as |*M* – *X*_{t}| at each time *t* ≥ 0.

This can be proved as an extension of lemma 1, noting first that the processes *Y*_{s} = *M* ∨ *X*^{∗}_{s} – *X*_{s} and *Z*_{s} = |*M* – *X*_{s}| agree up until the first time at which *X* hits level *M*. Denoting this time by *τ* = inf{*s* ≥ 0: *X*_{s} ≥ *M*}, we have *Y*_{t} = *Z*_{t} on the event that *τ* ≥ *t*. On the other hand, conditioning on any value of *τ* < *t*, the strong Markov property says that *X̃*_{s} = *X*_{τ + s} – *X*_{τ} is a Brownian motion and, hence, lemma 1 says that *Y*_{t} = *X̃*^{∗}_{t - τ} – *X̃*_{t - τ} and *Z*_{t} = |*X̃*_{t - τ}| have the same distribution, proving lemma 2.

In the post on the reflection principle we spent some time constructing the joint distribution of *X*^{∗}_{t} and *X*_{t}. Now, when looking at the distributions of *X*^{∗}_{t} – *X*_{t} and *M* ∨ *X*^{∗}_{t} – *X*_{t}, we did not use that result and, instead, employed a time reversal trick. We could, of course, have used the joint distribution as expressed by theorems 2 or 3 of that post instead. It was just illuminating to see how a simple time reversal converts the maximum process *X*^{∗} into the drawdown *X*^{∗} – *X*, so that lemma 1 is clearly equivalent to the result that *X*^{∗}_{t} and |*X*_{t}| have the same distribution.

As it is good to have multiple methods of approaching a problem, let us also look at how the drawdown distribution follows from the known joint distribution of *X*^{∗}_{t} and *X*_{t}. Setting *Y*_{t} = *M* ∨ *X*^{∗}_{t} – *X*_{t}, theorem 3 of the previous post gives

ℙ(*Y*_{t} > *a* ∣ *X*_{t}) = *e*^{–2*a*(*X*_{t} + *a*)/*t*}.

This holds on the event *M* – *X*_{t} ≤ *a*; otherwise, when *M* – *X*_{t} > *a*, we automatically have *Y*_{t} > *a*. So, setting *Z*_{t} = *M* – *X*_{t} and taking expectations,

ℙ(*Y*_{t} > *a*) = ℙ(*Z*_{t} > *a*) + 𝔼[1_{{*Z*_{t} ≤ *a*}}*e*^{–2*a*(*a* + *M* – *Z*_{t})/*t*}].

By either writing out the integral with respect to the normal density in the second term on the right hand side, or by applying a change of measure which shifts the mean of *Z*_{t} by 2*a*, this gives

ℙ(*Y*_{t} > *a*) = ℙ(*Z*_{t} > *a*) + ℙ(*Z*_{t} < –*a*) = ℙ(|*Z*_{t}| > *a*).

Hence, *Y*_{t} and |*Z*_{t}| have the same distribution, giving a second proof of lemma 2.
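
The change-of-measure step can be verified numerically. The following sketch (my own check, for one arbitrary choice of *M*, *t* and *a*) evaluates the expectation in the second term by integrating against the N(*M*, *t*) density of *Z*_{t} and compares the result with ℙ(|*Z*_{t}| > *a*):

```python
import numpy as np
from math import erf, pi, sqrt

M, t, a = 0.7, 2.0, 1.3

def norm_cdf(x, mean, var):
    """Cumulative distribution function of N(mean, var)."""
    return 0.5 * (1.0 + erf((x - mean) / sqrt(2.0 * var)))

# E[1{Z_t <= a} exp(-2a(a + M - Z_t)/t)] by trapezoidal integration against
# the N(M, t) density of Z_t, truncating the negligible far-left tail
z = np.linspace(M - 12.0 * sqrt(t), a, 400_001)
f = (np.exp(-(z - M) ** 2 / (2.0 * t)) / sqrt(2.0 * pi * t)
     * np.exp(-2.0 * a * (a + M - z) / t))
dz = z[1] - z[0]
integral = dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

lhs = (1.0 - norm_cdf(a, M, t)) + integral             # P(Z_t > a) + integral
rhs = (1.0 - norm_cdf(a, M, t)) + norm_cdf(-a, M, t)   # P(|Z_t| > a)

print(lhs, rhs)
```

The two sides agree to high accuracy, confirming that the integral term equals ℙ(*Z*_{t} < –*a*).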

Finally, we can make use of the drawdown distribution as expressed in lemma 2 to find its joint distribution.

**Theorem 3** If *X* is standard Brownian motion, then *X*^{∗}_{t} – *X*_{t} and |*X*_{t}| have identical joint distributions over 0 ≤ *t* < ∞.

To prove this, it is sufficient to show that both processes are Markov with respect to the natural filtration ℱ_{·} of *X*, with the same transition probabilities and initial state. So, for times *s* < *t*, we need to show that the distribution of *Y*_{t} = *X*^{∗}_{t} – *X*_{t} conditioned on ℱ_{s} depends only on *Y*_{s} in the same way that the conditional distribution of *Z*_{t} = |*X*_{t}| depends on *Z*_{s}.

Considering the standard Brownian motion *X̃*_{u} = *X*_{s + u} – *X*_{s}, which is independent of ℱ_{s}, we have

*Y*_{t} = *X*^{∗}_{t} – *X*_{t} = (*X*^{∗}_{s} – *X*_{s}) ∨ *X̃*^{∗}_{t – s} – *X̃*_{t – s}

so, by lemma 2 with *X*^{∗}_{s} – *X*_{s} in place of *M*, *X̃* in place of *X*, and *t* – *s* in place of *t*, this has the same distribution as |(*X*^{∗}_{s} - *X*_{s}) – *X̃*_{t - s}|= |*X*^{∗}_{s} – *X*_{t}| when conditioned on ℱ_{s}. This is the absolute value of a normal with mean *Y*_{s} and variance *t* – *s*.
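
The transition kernel just derived can again be checked by simulation. In the sketch below (mine, with an arbitrarily chosen current drawdown y = 0.8 and horizon h = 1), the drawdown restarted from y is compared with the exact mean of the folded normal |N(y, h)|:

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(3)
y, h, paths, steps = 0.8, 1.0, 40_000, 1_000

# Brownian increments over a horizon of length h
dW = rng.normal(0.0, (h / steps) ** 0.5, (paths, steps))
X = np.cumsum(dW, axis=1)

# drawdown after time h, started from current drawdown y:  y v X* - X
Y_h = np.maximum(y, X.max(axis=1).clip(min=0.0)) - X[:, -1]

# exact mean of |N(y, h)|, i.e. of the folded normal distribution
s = sqrt(h)
exact = s * sqrt(2.0 / pi) * exp(-y ** 2 / (2.0 * h)) + y * erf(y / (s * sqrt(2.0)))

print("simulated mean:", Y_h.mean())
print("exact mean    :", exact)
```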

From the definition of Brownian motion, *Z*_{t} conditioned on ℱ_{s} is the absolute value of a normal with mean *Z*_{s} and variance *t* – *s*. This shows that *Y* and *Z* are both Markov with the same transition probabilities and with starting state *Y*_{0} = *Z*_{0} = 0, proving theorem 3. In fact, we did not make any use of the initial states of *Y*, *Z* other than that they are equal, so the exact same argument holds if we set *Y*_{t} = *M* ∨ *X*^{∗}_{t} – *X*_{t} and *Z*_{t} = |*X*_{t} – *M*| giving the following generalization.

**Theorem 4** If *X* is standard Brownian motion and *M* ≥ 0 is a constant, then *M* ∨ *X*^{∗}_{t} – *X*_{t} and |*M* – *X*_{t}| have identical joint distributions over 0 ≤ *t* < ∞.
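
Theorem 4 concerns the whole joint law, which a comparison at a single time cannot detect. As a final sketch (my own, with the arbitrary choices *M* = 0.5 and observation times 0.5 and 1), one can compare a genuinely joint statistic of the two processes, such as the mean product of their values at two distinct times:

```python
import numpy as np

rng = np.random.default_rng(2)
M, paths, steps = 0.5, 40_000, 1_000
dW = rng.normal(0.0, steps ** -0.5, (paths, steps))
X = np.cumsum(dW, axis=1)
run_max = np.maximum.accumulate(X, axis=1).clip(min=0.0)  # X*_t, with X_0 = 0

Y = np.maximum(M, run_max) - X     # M v X*_t - X_t
Z = np.abs(M - X)                  # |M - X_t|

mid, end = steps // 2 - 1, steps - 1   # grid indices for times 0.5 and 1
print("E[Y_0.5 Y_1]:", (Y[:, mid] * Y[:, end]).mean())
print("E[Z_0.5 Z_1]:", (Z[:, mid] * Z[:, end]).mean())
```

The two product moments agree up to Monte Carlo and discretization error, consistent with the two processes sharing a joint law and not merely their marginals.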