The Brownian Drawdown Process

The drawdown of a stochastic process is the amount that it has dropped since it last hit its maximum value so far. For a process X with running maximum X*_t = sup_{s ≤ t} X_s, the drawdown is thus X*_t – X_t, which is a nonnegative process. This is as in figure 1 below.

Figure 1: Brownian motion and its drawdown process

The previous post used the reflection principle to show that the maximum of a Brownian motion has the same distribution as its terminal absolute value. That is, X*_t and |X_t| are identically distributed.

For a process X started from zero, its maximum and drawdown can be written as X*_t – X_0 and X*_t – X_t. Reversing the process in time across the interval [0, t] exchanges these values. So, reversing in time and translating so that the process still starts from zero will exchange the maximum value and the drawdown. Specifically, write

\displaystyle  Y_s = X_{t-s} - X_t

for time index 0 ≤ s ≤ t. The maximum of Y is equal to the drawdown of X,

\displaystyle  Y^*_t = X^*_t-X_t.
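This pathwise identity can be checked directly on a discretized path. A minimal sketch, using a random walk approximation of Brownian motion (step count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 1000, 1.0

# Random walk approximation of Brownian motion on [0, t], started from zero
X = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(t / n), n))))

# Time reversal, translated to start from zero: Y_s = X_{t-s} - X_t
Y = X[::-1] - X[-1]

# The maximum of Y equals the terminal drawdown of X: Y*_t = X*_t - X_t
assert np.isclose(Y.max(), X.max() - X[-1])
```

Note that this identity holds exactly for every sample path, not just in distribution, which is what makes the time reversal argument work.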

If X is standard Brownian motion then so is Y, since the independent normal increments property for Y follows from that of X. As already stated, the maximum Y*_t = X*_t – X_t has the same distribution as the absolute value |Y_t| = |X_t|. So, the drawdown has the same distribution as the absolute value at each time.

Lemma 1 If X is standard Brownian motion, then X*_t – X_t has the same distribution as |X_t| at each time t ≥ 0.
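Lemma 1 is easy to check by simulation. A sketch comparing the sample means of the drawdown and of |X_t| against the half-normal mean √(2t/π); path count, step count and tolerances are arbitrary choices, and the discrete walk slightly underestimates the true running maximum, hence the loose tolerance:

```python
import numpy as np

rng = np.random.default_rng(1)
paths, steps, t = 100_000, 400, 1.0
dt = t / steps

X = np.zeros(paths)     # terminal values X_t
Xmax = np.zeros(paths)  # running maxima X*_t (X*_0 = X_0 = 0)
for _ in range(steps):
    X += rng.normal(0.0, np.sqrt(dt), paths)
    np.maximum(Xmax, X, out=Xmax)

drawdown = Xmax - X     # X*_t - X_t
abs_end = np.abs(X)     # |X_t|

# Both should approximate the half-normal mean sqrt(2t/pi) ~ 0.798
assert abs(drawdown.mean() - abs_end.mean()) < 0.05
assert abs(abs_end.mean() - np.sqrt(2 * t / np.pi)) < 0.02
```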

This equivalence of distributions can be taken much further. Whereas the post on the reflection principle only compared distributions at fixed times, it turns out that the joint distributions of the entire drawdown process are equivalent to those of the reflecting Brownian motion |X|. Certainly, this is far from true for the running maximum X*, which is increasing and never hits zero after time 0, in contrast to |X|, which is never increasing over any nontrivial time interval and hits 0 every time that X changes sign. On the other hand, the drawdown X* – X moves along with the Brownian motion X except at the times at which X hits a new maximum, in which case the drawdown hits 0. This is qualitatively similar to the paths of a reflecting Brownian motion, and we can show that they are identically distributed.

The idea is to show that both the absolute value and the drawdown of a Brownian motion are Markov with the same transition probabilities, from which it follows that they have the same joint distributions. This requires looking at the distribution of the drawdown of X started at each positive time s > 0. However, the running maximum of the forward started path Y_t = X_{s+t} is not the same as that of X, since the latter also maximizes over the values of X_u for u < s. Instead, we have X*_{s+t} = X*_s ∨ Y*_t. For this reason, we need a generalization of lemma 1 in which the running maximum of the process is itself maxed with an independent value.

Consider fixing a value M ≥ 0, which could represent the maximum value of X at time zero (e.g., if there was some history of the process before time 0 achieving this maximum). So, the running maximum of X is replaced by M ∨ X* and the drawdown is replaced by M ∨ X* – X. Lemma 1 carries across to this case.

Lemma 2 If X is standard Brownian motion and M ≥ 0 is a fixed value, then M ∨ X*_t – X_t has the same distribution as |M – X_t| at each time t ≥ 0.

This can be proved as an extension of lemma 1, noting first that the processes Y_s = M ∨ X*_s – X_s and Z_s = |M – X_s| are equal up until the first time at which X hits level M. Denoting this time by τ = inf{s ≥ 0: X_s ≥ M}, we have Y_t = Z_t on the event that τ ≥ t. On the other hand, conditioning on any value of τ < t, the strong Markov property says that X̃_s = X_{τ+s} – X_τ is a Brownian motion and, hence, lemma 1 says that Y_t = X̃*_{t–τ} – X̃_{t–τ} and Z_t = |X̃_{t–τ}| have the same distribution, proving lemma 2.
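Lemma 2 can likewise be checked by simulation, with an arbitrarily chosen prior maximum M; this is a sketch, with tolerances chosen loosely to account for Monte Carlo noise and the downward discretization bias of the walk's running maximum:

```python
import numpy as np

rng = np.random.default_rng(2)
paths, steps, t, M0 = 100_000, 400, 1.0, 0.5
dt = t / steps

X = np.zeros(paths)
Xmax = np.zeros(paths)
for _ in range(steps):
    X += rng.normal(0.0, np.sqrt(dt), paths)
    np.maximum(Xmax, X, out=Xmax)

Y = np.maximum(M0, Xmax) - X   # M v X*_t - X_t
Z = np.abs(M0 - X)             # |M - X_t|

# Lemma 2: Y_t and |M - X_t| should have approximately equal
# means and tail probabilities
assert abs(Y.mean() - Z.mean()) < 0.05
assert abs((Y > 1.0).mean() - (Z > 1.0).mean()) < 0.03
```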

In the post on the reflection principle we spent some time constructing the joint distribution of X*_t and X_t. Now, when looking at the distributions of X*_t – X_t and M ∨ X*_t – X_t, we did not use that result and, instead, employed a time reversal trick. We could, of course, have used the joint distribution as expressed by theorems 2 or 3 of that post instead. It was just illuminating to see how a simple time reversal converts the maximum process X* into the drawdown X* – X, so that lemma 1 is clearly equivalent to the result that X*_t and |X_t| have the same distribution.

As it is good to have multiple methods of approaching a problem, let us also look at how the drawdown distribution follows from the known joint distribution of X*_t and X_t. Setting Y_t = M ∨ X*_t – X_t, theorem 3 of the previous post gives

\displaystyle  \begin{aligned} {\mathbb P}(Y_t >a\vert\; X_t) &={\mathbb P}(X^*_t >a+X_t\vert\; X_t)\\ &=e^{-2(a+X_t)t^{-1}a}. \end{aligned}

This holds on the event that M – X_t ≤ a; otherwise, when M – X_t > a, we automatically have Y_t > a. So, setting Z_t = M – X_t and taking expectations,

\displaystyle  {\mathbb P}(Y_t > a)={\mathbb P}(Z_t > a) + {\mathbb E}[1_{\{Z_t\le a\}}e^{-2(a+M-Z_t)t^{-1}a}].

By either writing out the integral with respect to the normal density in the second term on the right hand side, or by applying a change of measure which shifts the mean of Z_t by 2a, this gives

\displaystyle  \begin{aligned} {\mathbb P}(Y_t > a) &={\mathbb P}(Z_t > a) + {\mathbb P}(Z_t\le -a)\\ &={\mathbb P}(\lvert Z_t\rvert > a). \end{aligned}
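The change of measure step can be verified numerically using Z_t alone, which is just a normal with mean M and variance t, so no path simulation is needed. A sketch with arbitrary values of M, a and t:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
M, a, t, n = 0.5, 1.0, 1.0, 1_000_000

Z = rng.normal(M, math.sqrt(t), n)   # Z_t = M - X_t ~ N(M, t)

# Right hand side before the change of measure:
# P(Z_t > a) + E[1_{Z_t <= a} exp(-2(a + M - Z_t) a / t)]
rhs = (Z > a).mean() + ((Z <= a) * np.exp(-2 * (a + M - Z) * a / t)).mean()

# After the change of measure this collapses to P(|Z_t| > a)
lhs = (np.abs(Z) > a).mean()

assert abs(lhs - rhs) < 0.01
```

Since the identity is exact in expectation, the two Monte Carlo estimates differ only by sampling noise.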

Hence, Y_t and |Z_t| have the same distribution, giving a second proof of lemma 2.

Finally, we can make use of the drawdown distribution as expressed in lemma 2 to find the joint distributions of the drawdown process.

Theorem 3 If X is standard Brownian motion, then X*_t – X_t and |X_t| have identical joint distributions over 0 ≤ t < ∞.

To prove this, it is sufficient to show that both processes are Markov with respect to the natural filtration ℱ_· of X, with the same transition probabilities and initial state. So, for times s < t, we need to show that the distribution of Y_t = X*_t – X_t conditioned on ℱ_s depends only on Y_s in the same way that the conditional distribution of Z_t = |X_t| depends on Z_s.

Considering the standard Brownian motion X̃_u = X_{s+u} – X_s, which is independent of ℱ_s, we have

\displaystyle  Y_t=(X^*_s-X_s)\vee \tilde X^*_{t-s} - \tilde X_{t-s}

so, by lemma 2 with X*_s – X_s in place of M, X̃ in place of X, and t – s in place of t, this has the same distribution as |(X*_s – X_s) – X̃_{t–s}| = |X*_s – X_t| when conditioned on ℱ_s. This is the absolute value of a normal with mean Y_s and variance t – s.

From the definition of Brownian motion, Z_t conditioned on ℱ_s is the absolute value of a normal with mean Z_s and variance t – s. This shows that Y and Z are both Markov with the same transition probabilities and with starting state Y_0 = Z_0 = 0, proving theorem 3. In fact, we did not make any use of the initial states of Y, Z other than that they are equal, so the exact same argument holds if we set Y_t = M ∨ X*_t – X_t and Z_t = |M – X_t|, giving the following generalization.

Theorem 4 If X is standard Brownian motion and M ≥ 0 is a constant, then M ∨ X*_t – X_t and |M – X_t| have identical joint distributions over 0 ≤ t < ∞.
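Theorem 4 concerns the whole process, not just one-dimensional marginals, so a simulation check should compare the two processes at more than one time. A sketch comparing the cross-moment E[Y_s Y_t] with E[|M – X_s| |M – X_t|] at two fixed times (all numeric choices are arbitrary; the discrete walk biases the running maximum slightly low, hence the loose tolerance):

```python
import numpy as np

rng = np.random.default_rng(4)
paths, steps, t, M0 = 100_000, 1000, 1.0, 0.3
dt = t / steps
mid = steps // 2               # index of the intermediate time s = t/2

X = np.zeros(paths)
Xmax = np.zeros(paths)
for k in range(1, steps + 1):
    X += rng.normal(0.0, np.sqrt(dt), paths)
    np.maximum(Xmax, X, out=Xmax)
    if k == mid:
        Y_s = np.maximum(M0, Xmax) - X    # drawdown at time s
        Z_s = np.abs(M0 - X)              # |M - X_s|

Y_t = np.maximum(M0, Xmax) - X            # drawdown at time t
Z_t = np.abs(M0 - X)                      # |M - X_t|

# Equal joint distributions imply equal cross-moments
assert abs((Y_s * Y_t).mean() - (Z_s * Z_t).mean()) < 0.06
```

Matching one cross-moment is of course far weaker than matching the joint distributions, but it is a check that the one-dimensional comparisons above cannot provide.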
