# Brownian Drawdowns

Here, I apply the theory outlined in the previous post to fully describe the drawdown point process of a standard Brownian motion. In fact, as I will show, the drawdowns can all be constructed from independent copies of a single ‘Brownian excursion’ stochastic process. Recall that we start with a continuous stochastic process X, assumed here to be Brownian motion, and define its running maximum as ${M_t=\sup_{s\le t}X_s}$ and drawdown process ${D_t=M_t-X_t}$. This is as in figure 1 above.

Next, ${D^a}$ was defined to be the drawdown ‘excursion’ over the interval at which the maximum process is equal to the value ${a \ge 0}$. Precisely, if we let ${\tau_a}$ be the first time at which X hits level ${a}$ and ${\tau_{a+}}$ be its right limit ${\tau_{a+}=\lim_{b\downarrow a}\tau_b}$ then,

 $\displaystyle D^a_t=D_{(\tau_a+t)\wedge\tau_{a+}}=a-X_{(\tau_a+t)\wedge\tau_{a+}}.$

Next, a random set S is defined as the collection of all nonzero drawdown excursions indexed by the running maximum,

 $\displaystyle S=\left\{(a,D^a)\colon D^a\not=0\right\}.$

The drawdown excursions corresponding to the sample path from figure 1 are shown in figure 2 below.

As described in the post on semimartingale local times, the joint distribution of the drawdown and running maximum ${(D,M)}$, of a Brownian motion, is identical to the distribution of its absolute value and local time at zero, ${(\lvert X\rvert,L^0)}$. Hence, the point process consisting of the drawdown excursions indexed by the running maximum, and the absolute value of the excursions from zero indexed by the local time, both have the same distribution. So, the theory described in this post applies equally to the excursions away from zero of a Brownian motion.

Before going further, let’s recap some of the technical details. The excursions lie in the space E of continuous paths ${z\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, on which we define a canonical process Z by sampling the path at each time t, ${Z_t(z)=z_t}$. This space is given the topology of uniform convergence over finite time intervals (compact open topology), which makes it into a Polish space, and whose Borel sigma-algebra ${\mathcal E}$ is equal to the sigma-algebra generated by ${\{Z_t\}_{t\ge0}}$. As shown in the previous post, the counting measure ${\xi(A)=\#(S\cap A)}$ is a random point process on ${({\mathbb R}_+\times E,\mathcal B({\mathbb R}_+)\otimes\mathcal E)}$. In fact, it is a Poisson point process, so its distribution is fully determined by its intensity measure ${\mu={\mathbb E}\xi}$.

Theorem 1 If X is a standard Brownian motion, then the drawdown point process ${\xi}$ is Poisson with intensity measure ${\mu=\lambda\otimes\nu}$ where,

• ${\lambda}$ is the standard Lebesgue measure on ${{\mathbb R}_+}$.
• ${\nu}$ is a sigma-finite measure on E given by
 $\displaystyle \nu(f) = \lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^{\sigma})]$ (1)

for all bounded continuous maps ${f\colon E\rightarrow{\mathbb R}}$ which vanish on paths of length less than L (for some ${L > 0}$). The limit is taken over ${\epsilon > 0}$, ${{\mathbb E}_\epsilon}$ denotes expectation under the measure with respect to which Z is a Brownian motion started at ${\epsilon}$, and ${\sigma}$ is the first time at which Z hits 0. This measure satisfies the following properties,

• ${\nu}$-almost everywhere, there exists a time ${T > 0}$ such that ${Z > 0}$ on ${(0,T)}$ and ${Z=0}$ everywhere else.
• for each ${t > 0}$, the distribution of ${Z_t}$ has density
 $\displaystyle p_t(z)=z\sqrt{\frac 2{\pi t^3}}e^{-\frac{z^2}{2t}}$ (2)

over the range ${z > 0}$.

• over ${t > 0}$, Z is Markov, with the transition function of a Brownian motion stopped at zero.

I give the proof of this in a moment but, first, I briefly look at what it means and some simple consequences. For any measurable set ${A\in\mathcal B({\mathbb R}_+)\otimes\mathcal E}$, the value ${\xi(A)}$ is the number of nonzero excursions ${D^a}$ for which ${(a,D^a)}$ is in A, and has a Poisson distribution of rate ${\mu(A)}$. The fact that the intensity is a product measure, ${\lambda\otimes\nu}$, means that for any ${A\in\mathcal E}$ and ${B\in\mathcal B({\mathbb R}_+)}$, the number of values ${a\in B}$ for which ${D^a}$ is nonzero and lies in A is Poisson with rate ${\lambda(B)\nu(A)}$. In particular, the process ${Y_t=\xi([0,t]\times A)}$ counting the number of excursions lying in A is a Poisson process of rate ${\nu(A)}$. At least, this is true so long as ${\nu(A)}$ is finite.

Next, let's look at the stated properties of the measure ${\nu}$, and what these mean for the drawdowns. The first property simply states that, with probability one, for each ${a\ge0}$ such that the excursion ${D^a}$ is nonzero, there exists a time ${T > 0}$ such that ${D^a > 0}$ on ${(0,T)}$ and ${D^a=0}$ everywhere else. In fact, it is not difficult to see that ${T=\tau_{a+}-\tau_a}$ almost surely. This time T is the length of the excursion, which will be denoted by

 $\displaystyle {\rm len}(z)=\inf\left\{T\ge0\colon z_t=0{\rm\ for\ all\ }t\ge T\right\}$

and, ${\nu}$-almost everywhere, is a positive real number.

Next, for each time ${t\ge0}$, ${Z_t}$ is ${\nu}$-almost everywhere nonnegative. The second stated property gives its distribution over the range ${z > 0}$. This is a measure, but need not be a probability measure as the total measure will not generally be equal to one. The measure of ${\{Z_t > 0\}}$ can be computed by integrating over the density,

 \displaystyle \begin{aligned} \nu(\{z\in E\colon z_t > 0\}) &=\sqrt{\frac 2{\pi t^3}}\int_0^\infty ze^{-\frac{z^2}{2t}}dz\\ &=\sqrt{\frac2{\pi t}}. \end{aligned}

So, we see that ${\{Z_t > 0\}}$ has finite measure but, since this measure tends to infinity as t goes to zero, ${\nu}$ cannot be a finite measure. In fact, this gives

 $\displaystyle \nu\left(\{z\in E\colon z_t=0\}\right)=\infty$

for all times ${t \ge 0}$.
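
As a quick sanity check on this computation, the density (2) can be integrated numerically. The following sketch assumes numpy and scipy are available:

```python
import numpy as np
from scipy.integrate import quad

def p(z, t):
    """Density (2) of Z_t under the excursion measure, for z > 0."""
    return z * np.sqrt(2.0 / (np.pi * t**3)) * np.exp(-z * z / (2.0 * t))

def mass_positive(t):
    """nu({z in E : z_t > 0}), by integrating the density over (0, oo)."""
    total, _ = quad(p, 0.0, np.inf, args=(t,))
    return total

# agrees with the closed form sqrt(2/(pi*t)), which blows up as t -> 0
for t in [0.25, 1.0, 4.0]:
    assert abs(mass_positive(t) - np.sqrt(2.0 / (np.pi * t))) < 1e-6
```

The divergence of this mass as t goes to zero is exactly the non-finiteness of ${\nu}$ noted above.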

Noting that the excursion Z has length greater than a positive value T if and only if ${Z_T > 0}$, the intensity measure ${\nu_L}$ for the excursion lengths can be computed as

 $\displaystyle \nu_L([T,\infty))= \nu\left(Z_T > 0\right) =\sqrt{\frac2{\pi T}}.$

Equivalently, differentiating gives the density of the distribution of excursion lengths,

 $\displaystyle d\nu_L(T)=\frac{1}{\sqrt{2\pi}}T^{-3/2}dT.$ (3)

As ${\nu_L({\mathbb R}_+)}$ is infinite, this shows that there are infinitely many drawdowns over every nonzero time interval, although most of these will be of arbitrarily small length. We can also ask how many drawdowns of length at least T we can expect before the Brownian motion hits a given level ${a\ge0}$. The calculation above shows that this is Poisson distributed with parameter ${a\sqrt{\frac2{\pi T}}}$.
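
To make the numbers concrete, here is a small numeric sketch (assuming numpy and scipy) recovering the tail mass of ${\nu_L}$ from the density (3) and computing the resulting Poisson parameter:

```python
import numpy as np
from scipy.integrate import quad

def nu_L_tail(T):
    """nu_L([T, oo)), by integrating the density (3) over [T, oo)."""
    total, _ = quad(lambda t: t**-1.5 / np.sqrt(2.0 * np.pi), T, np.inf)
    return total

def expected_count(a, T):
    """Expected number of drawdowns of length >= T before the maximum reaches a."""
    return a * nu_L_tail(T)

# tail mass agrees with the closed form sqrt(2/(pi*T))
for T in [0.1, 1.0, 10.0]:
    assert abs(nu_L_tail(T) - np.sqrt(2.0 / (np.pi * T))) < 1e-6

# as the count is Poisson, e.g. P(no drawdown of length >= T before hitting a)
prob_none = np.exp(-expected_count(1.0, 1.0))
```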

If U is a normal random variable with mean 0 and variance t (defined on some probability space), then ${p_t(z)}$ is just the probability density of ${\lvert U\rvert}$ multiplied by ${z/t}$, so that,

 $\displaystyle \nu\left(1_{\{Z_t\not=0\}}f(Z_t)\right)=\frac1t{\mathbb E}\left[\lvert U\rvert f(\lvert U\rvert)\right]$

for all bounded measurable ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$, giving a useful alternative description of the distribution of ${Z_t}$. Equivalently, writing ${f_t(z)=f(z\sqrt t)/\sqrt t}$ then,

 $\displaystyle \nu\left(1_{\{Z_t\not=0\}}f(Z_t)\right)={\mathbb E}\left[\lvert U\rvert f_t(\lvert U\rvert)\right]$

for a standard normal random variable U defined on some probability space.

Finally, we look at the third property stating that Z is Markov with transition function of a Brownian motion stopped at zero. Recall that, with respect to a filtered probability space ${(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge0},{\mathbb P})}$ a (real-valued) process Z is Markov with transition function ${\{P_t\}_{t\ge0}}$ if and only if

 $\displaystyle {\mathbb E}[f(Z_{s+t})\vert\mathcal F_s]=P_tf(Z_s)$

almost surely for all bounded measurable ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$. In the situation here, we can define the sigma-algebras ${\mathcal E_t=\sigma(Z_s\colon s\le t)}$ for all ${t > 0}$, giving us a ‘filtered measure space’ ${(E,\mathcal E,\{\mathcal E_t\}_{t > 0},\nu)}$. The fact that ${\nu}$ is not a probability measure, and is not even finite, does not matter. All that is needed for defining the Markov property is for conditional expectations with respect to the sigma-algebra ${\mathcal E_s}$ to be well defined. For this, we just need the restriction of ${\nu}$ to ${\mathcal E_s}$ to be sigma-finite. This is indeed the case since, for each ${0 < T \le s}$, the collection of paths of length at least T is in ${\mathcal E_s}$ and the union of these as T goes to zero, together with the zero path, gives all of E. Then, the expectation of a bounded measurable random variable ${U\colon E\rightarrow{\mathbb R}}$ conditional on ${\mathcal E_s}$ is defined to be the unique (up to ${\nu}$-almost everywhere equivalence) ${\mathcal E_s}$-measurable random variable ${{\mathbb E}_\nu[U\vert\mathcal E_s]}$ satisfying,

 $\displaystyle \nu\left({\mathbb E}_\nu[U\vert\mathcal E_s]V\right)=\nu(UV)$

for all integrable ${\mathcal E_s}$-measurable ${V\colon E\rightarrow{\mathbb R}}$. So, the Markov property for ${\nu}$ is defined by

 $\displaystyle {\mathbb E}_\nu[f(Z_{s+t})\vert\mathcal E_s]=P_tf(Z_s)$

${\nu}$-almost everywhere, for all times ${s,t > 0}$ and bounded measurable ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$. Equivalently,

 $\displaystyle \nu\left(Uf(Z_{s+t})\right)=\nu\left(UP_tf(Z_s)\right)$

for all times ${s,t > 0}$, bounded measurable ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$ and ${\nu}$-integrable ${\mathcal E_s}$-measurable ${U\colon E\rightarrow{\mathbb R}}$.

In particular, if ${f\colon E\rightarrow{\mathbb R}_+}$ is of the form ${f(z)=g(z_{t_1},\ldots,z_{t_n})}$ for some sequence of times ${0 < t_1\le \cdots\le t_n}$ and measurable ${g\colon{\mathbb R}^n\rightarrow{\mathbb R}_+}$ then,

 $\displaystyle \nu(f) = \int\cdots\int g(z_{t_1},\ldots,z_{t_n})P_{t_n-t_{n-1}}(z_{t_{n-1}},dz_{t_n})\cdots P_{t_2-t_1}(z_{t_1},dz_{t_2})\nu(Z_{t_1}\in dz_{t_1}).$

This is sufficient to completely determine ${\nu}$.

With that brief explanation of Markov processes with respect to sigma-finite measures out of the way, the third property is saying that, for each ${s > 0}$, the process ${\{Z_t\}_{t\ge s}}$ is a Brownian motion stopped at zero, with initial distribution given by the density ${p_s}$. For example, consider the excursions with maximum ${\sup_{t\ge0}z_t}$ greater than some positive value K. Since a Brownian motion started at a level ${x}$ has probability ${x/K}$ of hitting level ${K\ge x}$ before zero, we see that,

 $\displaystyle {\mathbb P}_\nu[\sup_{s\ge t}Z_s\ge K\;\vert\mathcal E_t]=\frac {Z_t}K\wedge1.$

And, hence,

 \displaystyle \begin{aligned} \nu\left(\{z\in E\colon \sup_t z_t\ge K\}\right) &=\frac1K\lim_{t\rightarrow0}\nu(Z_t\wedge K)\\ &=\frac1K\lim_{t\rightarrow0}{\mathbb E}\left[\lvert U\rvert\left(\lvert U\rvert\wedge (K/\sqrt t)\right)\right]\\ &=\frac1K{\mathbb E}[U^2]=\frac1K, \end{aligned}

where U is a standard normal random variable defined on some probability space. This can alternatively be computed using (1),

 \displaystyle \begin{aligned} \nu\left(\{z\in E\colon \sup_t z_t\ge K\}\right) &= \lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb P}_\epsilon[\sup_t Z_t^\sigma\ge K]\\ &= \lim_{\epsilon\rightarrow0}\epsilon^{-1}\frac{\epsilon}{K} =\frac1K. \end{aligned}

I ignored the requirement that the map ${z\mapsto1_{\{\sup_t z_t\ge K\}}}$ be continuous when applying (1) but, by smoothing with respect to K, it is straightforward to make this rigorous. So, consider the question of how many drawdowns of height at least K we can expect before the process hits a level ${a\ge0}$. By what we have just computed, this has the Poisson distribution with parameter ${a/K}$.
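
This 1/K law is easy to probe by Monte Carlo. The sketch below (assuming numpy; the step size, truncation time, and parameter values are arbitrary choices) counts drawdown excursions of height at least K along a discretized Brownian path, for comparison with the Poisson mean a/K:

```python
import numpy as np

rng = np.random.default_rng(7)

def count_deep_drawdowns(a, K, dt=1e-4, t_max=50.0):
    """Count drawdown excursions of height >= K before the running maximum
    of a simulated Brownian path first reaches level a (truncated at t_max)."""
    n = int(t_max / dt)
    x = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
    m = np.maximum.accumulate(x)
    # first index at which the running maximum reaches a (or n if truncated)
    hit = int(np.argmax(m >= a)) if m[-1] >= a else n
    d = m[:hit] - x[:hit]                  # drawdown process D = M - X
    rec = np.flatnonzero(d == 0.0)         # record times, where D = 0
    count, prev = 0, -1
    for r in list(rec) + [hit]:
        seg = d[prev + 1:r]                # one excursion away from zero
        if seg.size and seg.max() >= K:
            count += 1
        prev = r
    return count

counts = [count_deep_drawdowns(a=0.5, K=0.25) for _ in range(200)]
mean = float(np.mean(counts))
```

The sample mean should be close to a/K = 2, up to discretization and truncation bias.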

The transition probabilities for Z can be computed explicitly. For any ${x > 0}$, we let ${{\mathbb P}_x}$ denote the probability measure on ${(E,\mathcal E)}$ under which Z is a Brownian motion started at x. If ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$ satisfies ${f(x)=0}$ for ${x\le 0}$ then, by the reflection principle,

 \displaystyle \begin{aligned} {\mathbb E}_x[f(Z_t^\sigma)] &={\mathbb E}_x[f(Z_t)]-{\mathbb E}_x[1_{\{\sigma\le t\}}f(Z_t)]\\ &={\mathbb E}_x[f(Z_t)]-{\mathbb E}_x[f(-Z_t)]. \end{aligned} (4)

The second equality holds here because replacing ${Z}$ by ${-Z}$ after the time ${\sigma}$ at which it hits zero does not change its distribution, which follows from the strong Markov property and the symmetry of Brownian motion under reflection about zero. As ${Z_t}$ is normal with mean ${x}$ and variance ${t}$ under the measure ${{\mathbb P}_x}$, this gives the probability density of ${Z_t}$, expressed as a function of the variable ${y}$, as,

 \displaystyle \begin{aligned} q_t(x,y) &=\frac1{\sqrt{2\pi t}}\left(e^{-\frac1{2t}(x-y)^2}-e^{-\frac1{2t}(x+y)^2}\right)\\ &=\frac{e^{-\frac{1}{2t}(x^2+y^2)}}{\sqrt{2\pi t}}\left(e^{\frac{xy}{t}}-e^{-\frac{xy}{t}}\right). \end{aligned} (5)

This is the transition density for Brownian motion stopped at zero, and is valid over ${x\ge0}$ and ${y > 0}$. As the single point ${y=0}$ has zero Lebesgue measure, it does not have a well defined probability density there. The probability ${P_t(x,\{0\})}$ is, however, given by ${1-P_t(x,(0,\infty))}$, so the transition probability is

 \displaystyle \begin{aligned} P_tf(x) &\equiv\int f(y)P_t(x,dy)\\ & = f(0) + \int_0^\infty q_t(x,y)(f(y) - f(0))dy, \end{aligned}

for all ${t > 0}$, ${x\ge 0}$ and measurable bounded functions ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$.
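
Since ${q_t}$ is a sub-probability density, with the missing mass sitting in the atom at zero, its total mass over ${(0,\infty)}$ should equal the survival probability ${2\Phi(x/\sqrt t)-1}$ of the hitting time ${\sigma}$. A numeric sketch, assuming numpy and scipy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def q(t, x, y):
    """Transition density (5) of Brownian motion stopped at zero (x, y > 0)."""
    g = lambda u: np.exp(-u * u / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return g(x - y) - g(x + y)

def surviving_mass(t, x):
    """P_t(x, (0, oo)): the mass not yet absorbed at zero by time t."""
    total, _ = quad(lambda y: q(t, x, y), 0.0, np.inf)
    return total

# by the reflection principle, the unabsorbed mass is
# P(sigma > t) = P(|N(0,t)| < x) = 2*Phi(x/sqrt(t)) - 1
for t, x in [(1.0, 0.5), (2.0, 1.0)]:
    assert abs(surviving_mass(t, x) - (2.0 * norm.cdf(x / np.sqrt(t)) - 1.0)) < 1e-7
```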

It can be checked that the following necessary consistency conditions for the marginal densities and transition probabilities do indeed hold.

 \displaystyle \begin{aligned} &\int_0^\infty p_s(x)q_t(x,y)dx=p_{s+t}(y),\\ &\int_0^\infty q_s(x,y)q_t(y,z) dy = q_{s+t}(x,z). \end{aligned}
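
The consistency of the entrance densities ${p_t}$ with the transition densities ${q_t}$ can be spot-checked numerically by quadrature at sample times; a sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.integrate import quad

def p(t, z):
    """Entrance density (2)."""
    return z * np.sqrt(2.0 / (np.pi * t**3)) * np.exp(-z * z / (2.0 * t))

def q(t, x, y):
    """Transition density (5) of Brownian motion stopped at zero."""
    g = lambda u: np.exp(-u * u / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return g(x - y) - g(x + y)

s, t = 0.7, 1.3

# the entrance density flows under the transition kernel: p_s q_t = p_{s+t}
y0 = 0.9
lhs, _ = quad(lambda x: p(s, x) * q(t, x, y0), 0.0, np.inf)
assert abs(lhs - p(s + t, y0)) < 1e-6

# Chapman-Kolmogorov for the stopped kernel: q_s q_t = q_{s+t}
x0, z0 = 0.4, 1.1
lhs, _ = quad(lambda y: q(s, x0, y) * q(t, y, z0), 0.0, np.inf)
assert abs(lhs - q(s + t, x0, z0)) < 1e-6
```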

Proof of theorem 1: As standard Brownian motion is strong Markov, the fact that ${\xi}$ is Poisson is given by theorem 2 of the previous post. Then, by theorem 3 of the same post, the intensity measure ${\mu}$ satisfies

 $\displaystyle \int g(a) f(z)d\mu(a,z)=\lim_{\epsilon\rightarrow0}\int_0^\infty g(a)\epsilon^{-1}{\mathbb E}_a[f(a+\epsilon-Z^{\sigma_{a+\epsilon}})]da$

where ${\sigma_{a+\epsilon}}$ is the first time at which Z hits ${a+\epsilon}$. However, by symmetry, ${a+\epsilon-Z}$ is a Brownian motion started at ${\epsilon}$. Hence, ${{\mathbb E}_a[f(a+\epsilon-Z^{\sigma_{a+\epsilon}})]}$ is equal to ${{\mathbb E}_\epsilon[f(Z^\sigma)]}$, giving

 $\displaystyle \int g(a) f(z)d\mu(a,z)=\lim_{\epsilon\rightarrow0}\lambda(g)\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^{\sigma})].$

So, choosing any nonnegative ${g}$ with nonzero integral, this implies the existence of a measure ${\nu}$ on E satisfying,

 $\displaystyle \nu(f)=\lambda(g)^{-1}\int g(a)f(z)d\mu(a,z) = \lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^\sigma)].$

Since this is independent of the choice of ${g}$, we have ${\mu=\lambda\times\nu}$ as required.

We now show that Z satisfies the Markov property under ${\nu}$. Let ${\{P_t\}_{t\ge0}}$ be the transition function for Brownian motion stopped at zero, and choose times ${s,t > 0}$. If ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$ is continuous, then so is ${P_tf}$. Next, consider any bounded continuous map ${g\colon E\rightarrow{\mathbb R}}$ which is ${\mathcal E_s}$-measurable, and vanishes on paths of length less than ${L > 0}$. Then,

 \displaystyle \begin{aligned} \nu(g(Z)f(Z_{s+t})) &=\lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[g(Z^\sigma)f(Z^\sigma_{s+t})]\\ &=\lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[g(Z^\sigma)P_tf(Z^\sigma_s)]\\ &=\nu(g(Z)P_tf(Z_s)). \end{aligned}

By the monotone class theorem, this holds with ${g(Z)}$ replaced by any bounded ${\mathcal E_s}$-measurable random variable as required.

Next, we look at the distribution of ${Z_t}$. For any bounded and continuous ${f\colon{\mathbb R}\rightarrow{\mathbb R}}$ satisfying ${f(0)=0}$ then, using (5) for the transition probability of Brownian motion stopped at 0,

 \displaystyle \begin{aligned} \nu(f(Z_t)) &=\lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^\sigma_t)]\\ &=\lim_{\epsilon\rightarrow0}\int_0^\infty\frac{e^{-\frac1{2t}(x^2+\epsilon^2)}}{\sqrt{2\pi t}}\epsilon^{-1}\left(e^{\frac{\epsilon x}{t}}-e^{-\frac{\epsilon x}{t}}\right)f(x)dx\\ &=\int_0^\infty p_t(x)f(x)dx, \end{aligned}

using bounded convergence.

All that remains is to show that, ${\nu}$-almost everywhere, there is a positive time T such that ${Z > 0}$ on ${(0,T)}$ and ${Z=0}$ elsewhere. By construction, and the fact that the definition restricted to nonzero drawdowns ${D^a\not=0}$, there is a positive time T such that Z is nonnegative on ${(0,T)}$ and is zero outside of this range, with ${Z_t > 0}$ for times t arbitrarily close to T. However, we have already shown that, from any positive time, Z is distributed as a Brownian motion stopped as soon as it hits zero, so that ${Z_t > 0}$ for all ${0 < t < T}$. ⬜

As is well known, a standard Brownian motion is invariant under a simple scaling operation. More precisely, for any continuous path ${z\colon{\mathbb R}_+\rightarrow{\mathbb R}}$ and fixed ${T > 0}$, define the scaled path

 $\displaystyle (S_T\circ z)_t = T^{1/2}z_{t/T}.$

For a standard Brownian motion X, ${S_T\circ X}$ is another standard Brownian motion, as can be verified by noting that it remains centered Gaussian with independent increments and with the same variances. As the Brownian drawdown excursions were constructed from sample paths of a Brownian motion, we expect a similar symmetry to hold for the measure ${\nu}$. Since ${S_T}$ scales the length of each excursion by the factor T, it cannot simply be invariant. Note that the measure ${\nu_L}$ for the excursion length given by (3) is not invariant under ${S_T}$ but, instead, satisfies the symmetry

 $\displaystyle \int f(tT)\nu_L(dt)=T^{1/2}\nu_L(f),$ (6)

and ${\nu}$ does indeed satisfy the obvious extension of this.
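
The scaling relation (6) can be checked numerically against a test function which vanishes quickly enough at zero to be ${\nu_L}$-integrable; a sketch assuming numpy and scipy:

```python
import numpy as np
from scipy.integrate import quad

def nu_L(f):
    """Integrate f against the excursion-length measure, density (3)."""
    total, _ = quad(lambda t: f(t) * t**-1.5 / np.sqrt(2.0 * np.pi), 0.0, np.inf)
    return total

f = lambda t: t * np.exp(-t)   # vanishes at 0 fast enough to be integrable

# scaling symmetry (6): integral of f(tT) d(nu_L)(t) = sqrt(T) * nu_L(f)
for T in [0.5, 2.0, 5.0]:
    lhs = nu_L(lambda t: f(t * T))
    assert abs(lhs - np.sqrt(T) * nu_L(f)) < 1e-5
```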

Lemma 2 The measure ${\nu}$ for Brownian drawdown excursions satisfies the symmetry ${S_T*\nu=T^{1/2}\nu}$. That is,

 $\displaystyle \int f(S_T\circ z)d\nu(z)=T^{1/2}\nu(f)$ (7)

for all ${T > 0}$ and nonnegative measurable ${f\colon E\rightarrow{\mathbb R}}$.

Proof: If X is a standard Brownian motion then the same is true of ${\tilde X=S_T\circ X}$ and, hence, their drawdown point processes have the same distribution. However, under the map ${S_T}$, each drawdown excursion ${D^a}$ of X, corresponding to running maximum ${M=a}$, is mapped to the excursion ${S_T\circ D^a}$ of ${\tilde X}$, corresponding to running maximum ${T^{1/2}a}$. Hence,

 $\displaystyle \int f(a,z)d\xi(a,z)\overset{d}=\int f(T^{1/2}a,S_T\circ z)d\xi(a,z)$

for nonnegative measurable ${f\colon{\mathbb R}_+\times E\rightarrow{\mathbb R}}$. Taking expectations gives,

 $\displaystyle \int f(a,z)d\mu(a,z)=\int f(T^{1/2}a,S_T\circ z)d\mu(a,z).$

In particular, if ${f\colon E\rightarrow{\mathbb R}}$ is nonnegative and measurable then, replacing ${f(a,z)}$ by ${1_{\{a\le1\}}f(z)}$ in this identity gives,

 $\displaystyle \nu(f) = T^{-1/2}\int f(S_T\circ z)d\nu(z)$

as required. ⬜

As a consequence of this, the excursion paths can be expressed as a scaled version of a normalized excursion of unit length. The excursion length and the normalized path are actually independent variables. Furthermore, the measure for the normalized path is a true probability measure, rather than just sigma-finite. To be precise, consider any ${z\in E}$ with length ${T={\rm len}(z)}$, which we assume to be finite and strictly positive (as is the case for ${\nu}$-almost all paths). The normalized path is then,

 $\displaystyle \hat z= S_{T^{-1}}\circ z.$

The excursion can be reconstructed from its length together with the normalized path by scaling, ${z=S_T\circ \hat z}$. I use ${\hat E}$ for the set of paths ${z\colon{\mathbb R}_+\rightarrow{\mathbb R}}$ of unit length and ${\hat{\mathcal E}}$ for the restriction of the sigma-algebra ${\mathcal E}$ to ${\hat E}$.
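
For a discretely sampled path, both the scaling map ${S_T}$ and the normalization amount to rescaling the time and value arrays; a toy sketch assuming numpy, with paths stored as (times, values) pairs:

```python
import numpy as np

def scale(times, values, T):
    """S_T: (S_T o z)_t = sqrt(T) * z_{t/T}, acting on a sampled path."""
    return T * times, np.sqrt(T) * values

def normalize(times, values):
    """Return the unit-length path z-hat = S_{1/T} o z, where T = len(z)."""
    T = times[-1]              # length of the sampled excursion
    return scale(times, values, 1.0 / T)

# round trip: z = S_T o z-hat
t = np.linspace(0.0, 4.0, 9)
z = np.sin(np.pi * t / 4.0)    # a toy "excursion" of length 4
nt, nz = normalize(t, z)
rt, rz = scale(nt, nz, 4.0)
assert abs(nt[-1] - 1.0) < 1e-12
assert np.allclose(rt, t) and np.allclose(rz, z)
```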

Theorem 3 Under the measure ${\nu}$ on E, the variables ${{\rm len}(Z)}$ and ${\hat Z}$ are independent. That is, there exists a unique probability measure ${\hat\nu}$ on ${(\hat E,\hat{\mathcal E})}$ such that the pair ${({\rm len}(Z),\hat Z)}$ has distribution ${\nu_L\times\hat\nu}$. Explicitly,

 $\displaystyle \int f({\rm len}(z),\hat z)d\nu(z)=\int\int f(T,z)d\hat\nu(z)d\nu_L(T)$ (8)

for all nonnegative measurable ${f\colon{\mathbb R}_+\times\hat E\rightarrow{\mathbb R}}$. Equivalently,

 $\displaystyle \nu(f)=\int\int f(S_T\circ z)d\hat\nu(z)d\nu_L(T)$ (9)

for all nonnegative measurable ${f\colon E\rightarrow{\mathbb R}}$.

Proof: Uniqueness is straightforward. If ${f\colon\hat E\rightarrow{\mathbb R}}$ is measurable and nonnegative, then (8) can be applied. Fixing nonnegative measurable ${g\colon{\mathbb R}_+\rightarrow{\mathbb R}_+}$ with ${\nu_L(g)=1}$,

 $\displaystyle \int f(\hat z)g({\rm len}(z))d\nu(z) =\hat\nu(f)\nu_L(g)=\hat\nu(f)$

uniquely defines ${\hat\nu}$.

Conversely, let us define the measure ${\hat\nu}$ by the identity above. Note that substituting ${T^{-1}}$ in place of T in equation (3) for ${\nu_L}$ gives ${\nu_L(g)=\int g(T^{-1})Td\nu_L(T)}$. Hence, for any measurable ${f\colon{\mathbb R}_+\times\hat E\rightarrow{\mathbb R}_+}$, the symmetries of ${\nu}$ and ${\nu_L}$ give,

 \displaystyle \begin{aligned} \int f({\rm len}(z),\hat z)d\nu(z) &=\int\int f({\rm len}(z),\hat z)g(T^{-1})Td\nu(z)d\nu_L(T)\\ &=\int\int f(T{\rm len}(z),\hat z)g(T^{-1})T^{1/2}d\nu(z)d\nu_L(T)\\ &=\int\int f(T,\hat z)g(T^{-1}{\rm len}(z))T^{1/2}d\nu(z)d\nu_L(T)\\ &=\int\int f(T,\hat z)g({\rm len}(z))d\nu(z)d\nu_L(T). \end{aligned}

The second equality replaces ${z}$ by ${S_T\circ z}$ and uses (7). The third replaces T by ${T/{\rm len}(z)}$ and uses (6), and the fourth again replaces ${z}$ by ${S_T\circ z}$ and applies (7). Using the definition ${\hat\nu(f)=\int f(\hat z)g({\rm len}(z))d\nu(z)}$ gives (8).

Finally, I prove the alternative expression (9). Suppose that ${f\colon E\rightarrow{\mathbb R}_+}$ is measurable. As ${z=S_{{\rm len}(z)}\circ \hat z}$ then, applying (8) gives,

 \displaystyle \begin{aligned} \nu(f) &= \int f(S_{{\rm len}(z)}\circ\hat z)d\nu(z)\\ &= \int\int f(S_T\circ z)d\hat\nu(z)d\nu_L(T) \end{aligned}

as required. ⬜

Since ${{\rm len}(Z)}$ has the sigma-finite distribution ${\nu_L}$ under the excursion measure, the conditional expectation given ${{\rm len}(Z)}$ is well-defined. Theorem 3 can alternatively be stated as

 $\displaystyle {\mathbb E}_\nu[f(\hat Z)\vert{\rm len}(Z)] = \hat\nu(f)$

${\nu}$-almost everywhere, for all measurable functions ${f\colon\hat E\rightarrow{\mathbb R}_+}$. In a slightly more general form,

 $\displaystyle {\mathbb E}_\nu[f(Z)\vert{\rm len}(Z)=T]=\int f(S_T\circ z)d\hat\nu(z)$

for ${\nu_L}$ almost all values of T. This is simply (9) expressed using conditional expectations.

Theorem 3 suggests that it may be useful to further decompose the excursions into their length and normalizations, giving a random subset of ${{\mathbb R}_+\times{\mathbb R}_+\times\hat E}$.

 $\displaystyle \hat S = \left\{(a,{\rm len}(D^a), \hat D^a)\colon a\ge0, D^a\not=0\right\}.$

This defines a new point process ${\hat\xi(A) = \#(\hat S\cap A)}$, over ${A\in\mathcal B({\mathbb R}_+)\otimes\mathcal B({\mathbb R}_+)\otimes\hat{\mathcal E}}$, which I will refer to as the normalized drawdown point process. We can clearly convert back and forth easily between the drawdown point process and the normalized version. Defining,

 \displaystyle \begin{aligned} &f\colon{\mathbb R}_+\times{\mathbb R}_+\times \hat E\rightarrow{\mathbb R}_+\times E,\\ &f(a,T,z) = (a,S_T\circ z) \end{aligned}

Then, for any ${A\in\mathcal B({\mathbb R}_+)\otimes\mathcal E}$ we have ${\xi(A)=\hat\xi(f^{-1}(A))}$ and, conversely, for ${A\in\mathcal B({\mathbb R}_+)\otimes\mathcal B({\mathbb R}_+)\otimes\hat{\mathcal E}}$, we have ${\hat\xi(A)=\xi(f(A))}$.

Corollary 4 If X is a standard Brownian motion, then the normalized drawdown point process is Poisson with intensity measure ${\lambda\times\nu_L\times\hat\nu}$.

Proof: As described above, we have ${\hat\xi(A)=\xi(f(A))}$ and, as ${f}$ is one-to-one and measurable with measurable inverse, the independent increments and Poisson property for ${\xi}$ carries directly across to ${\hat\xi}$ and, hence, it is a Poisson point process. The intensity measure is given by,

 \displaystyle \begin{aligned} {\mathbb E}\hat\xi(A)&={\mathbb E}\xi(f(A))\\ &=(\lambda\times\nu)(f(A))\\ &=(\lambda\times\nu_L\times\hat\nu)(A) \end{aligned}

as required. ⬜

Using the normalized drawdowns has the benefit that it only involves the rather easily described measures ${\lambda,\nu_L}$ on the nonnegative reals, and a probability measure ${\hat\nu}$ on the excursion processes. These normalized excursion processes, known as ‘Brownian excursions’, will be looked at in more detail in a later post.