# Brownian Drawdowns

Here, I apply the theory outlined in the previous post to fully describe the drawdown point process of a standard Brownian motion. In fact, as I will show, the drawdowns can all be constructed from independent copies of a single ‘Brownian excursion’ stochastic process. Recall that we start with a continuous stochastic process X, assumed here to be Brownian motion, and define its running maximum as ${M_t=\sup_{s\le t}X_s}$ and drawdown process ${D_t=M_t-X_t}$. This is as in figure 1 above.
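As a concrete illustration (a simulation sketch of my own, not from the original post), the running maximum and drawdown of a discretized Brownian path can be computed directly with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize a standard Brownian motion on [0, 1]: X_0 = 0 and
# independent N(0, dt) increments on a fine grid.
n = 100_000
dt = 1.0 / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Running maximum M_t = sup_{s <= t} X_s and drawdown D_t = M_t - X_t.
M = np.maximum.accumulate(X)
D = M - X
```

By construction D is nonnegative and vanishes exactly where the path is at a new maximum.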

Next, ${D^a}$ was defined to be the drawdown ‘excursion’ over the interval on which the maximum process is equal to the value ${a \ge 0}$. Precisely, if we let ${\tau_a}$ be the first time at which X hits level ${a}$ and ${\tau_{a+}}$ be its right limit ${\tau_{a+}=\lim_{b\downarrow a}\tau_b}$, then,

 $\displaystyle D^a_t=D_{(\tau_a+t)\wedge\tau_{a+}}=a-X_{(\tau_a+t)\wedge\tau_{a+}}.$

Next, a random set S is defined as the collection of all nonzero drawdown excursions indexed by the running maximum,

 $\displaystyle S=\left\{(a,D^a)\colon D^a\not=0\right\}.$

The set of drawdown excursions corresponding to the sample path from figure 1 is shown in figure 2 below.
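In discrete time, the analogue of the set S can be read off a simulated path: each maximal interval on which D > 0 is one excursion, and the (constant) value of M on that interval is its index a. A minimal numpy sketch, with all variable names my own:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a Brownian path and its drawdown on a fine grid.
n = 100_000
dt = 1.0 / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
M = np.maximum.accumulate(X)
D = M - X

# Maximal intervals on which D > 0: starts where D becomes positive,
# ends (exclusive) where it returns to zero.  Padding with zeros makes
# the start/end markers pair up even at the boundaries.
pos = np.concatenate([[0], (D > 0).astype(int), [0]])
starts = np.flatnonzero(np.diff(pos) == 1)
ends = np.flatnonzero(np.diff(pos) == -1)

# The discrete analogue of S: pairs (a, excursion path), where a is the
# value of the running maximum over the excursion interval.
S = [(M[s], D[s:e]) for s, e in zip(starts, ends)]
```

Note that M is exactly constant on each excursion interval, since the running maximum can only increase at times where the drawdown is zero.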

As described in the post on semimartingale local times, the joint distribution of the drawdown and running maximum ${(D,M)}$, of a Brownian motion, is identical to the distribution of its absolute value and local time at zero, ${(\lvert X\rvert,L^0)}$. Hence, the point process consisting of the drawdown excursions indexed by the running maximum, and the absolute value of the excursions from zero indexed by the local time, both have the same distribution. So, the theory described in this post applies equally to the excursions away from zero of a Brownian motion.
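This equality in law can be sanity-checked by Monte Carlo (a sketch of my own; discretizing time biases the running maximum slightly downward, so the comparison is only approximate): at any fixed time, ${D_t}$ and ${\lvert X_t\rvert}$ should have the same distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate many Brownian paths on [0, 1] and compare the samples of
# D_1 = M_1 - X_1 with those of |X_1|: by the identity in law above,
# the two should share the same distribution.
paths, n = 5_000, 2_000
dt = 1.0 / n
X = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)

M1 = np.maximum(X.max(axis=1), 0.0)   # running maximum at t = 1
D1 = M1 - X[:, -1]                    # drawdown at t = 1
A1 = np.abs(X[:, -1])                 # |X_1|

# Both sample means should be close to E|N(0,1)| = sqrt(2/pi) ~ 0.798.
```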

Before going further, let’s recap some of the technical details. The excursions lie in the space E of continuous paths ${z\colon{\mathbb R}_+\rightarrow{\mathbb R}}$, on which we define a canonical process Z by sampling the path at each time t, ${Z_t(z)=z_t}$. This space is given the topology of uniform convergence over finite time intervals (compact open topology), which makes it into a Polish space, and whose Borel sigma-algebra ${\mathcal E}$ is equal to the sigma-algebra generated by ${\{Z_t\}_{t\ge0}}$. As shown in the previous post, the counting measure ${\xi(A)=\#(S\cap A)}$ is a random point process on ${({\mathbb R}_+\times E,\mathcal B({\mathbb R}_+)\otimes\mathcal E)}$. In fact, it is a Poisson point process, so its distribution is fully determined by its intensity measure ${\mu={\mathbb E}\xi}$.

Theorem 1 If X is a standard Brownian motion, then the drawdown point process ${\xi}$ is Poisson with intensity measure ${\mu=\lambda\otimes\nu}$ where,

• ${\lambda}$ is the standard Lebesgue measure on ${{\mathbb R}_+}$.
• ${\nu}$ is a sigma-finite measure on E given by
 $\displaystyle \nu(f) = \lim_{\epsilon\rightarrow0}\epsilon^{-1}{\mathbb E}_\epsilon[f(Z^{\sigma})]$ (1)

for all bounded continuous maps ${f\colon E\rightarrow{\mathbb R}}$ which vanish on paths of length less than L (for some ${L > 0}$). The limit is taken over ${\epsilon > 0}$, ${{\mathbb E}_\epsilon}$ denotes expectation under the measure with respect to which Z is a Brownian motion started at ${\epsilon}$, and ${\sigma}$ is the first time at which Z hits 0. This measure satisfies the following properties,

• ${\nu}$-almost everywhere, there exists a time ${T > 0}$ such that ${Z > 0}$ on ${(0,T)}$ and ${Z=0}$ everywhere else.
• for each ${t > 0}$, the distribution of ${Z_t}$ has density
 $\displaystyle p_t(z)=z\sqrt{\frac 2{\pi t^3}}e^{-\frac{z^2}{2t}}$ (2)

over the range ${z > 0}$.

• over ${t > 0}$, Z is Markov, with the transition function of a Brownian motion stopped at zero.
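As a quick numerical check on (2) (my own sketch, with all function names hypothetical): integrating the density over ${z > 0}$ gives total mass ${\sqrt{2/(\pi t)}}$, the ${\nu}$-measure of excursions still alive at time t. This is finite for each ${t > 0}$ but diverges as ${t\rightarrow0}$, consistent with ${\nu}$ being sigma-finite rather than finite.

```python
import numpy as np

def p(t, z):
    # Density (2) of Z_t under the excursion measure nu, for z > 0.
    return z * np.sqrt(2.0 / (np.pi * t**3)) * np.exp(-z**2 / (2.0 * t))

def total_mass(t, zmax=60.0, steps=1_000_000):
    # Midpoint-rule integral of p_t over z > 0; the exact value is
    # sqrt(2 / (pi * t)), the nu-mass of excursions alive at time t.
    dz = zmax / steps
    z = (np.arange(steps) + 0.5) * dz
    return np.sum(p(t, z)) * dz
```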

# Criteria for Poisson Point Processes

If S is a finite random set in a standard Borel measurable space ${(E,\mathcal E)}$ satisfying the following two properties,

• if ${A,B\in\mathcal E}$ are disjoint, then the sizes of ${S\cap A}$ and ${S\cap B}$ are independent random variables,
• ${{\mathbb P}(x\in S)=0}$ for each ${x\in E}$,

then it is a Poisson point process. That is, the size of ${S\cap A}$ is a Poisson random variable for each ${A\in\mathcal E}$. This justifies the use of Poisson point processes in many different areas of probability and stochastic calculus, and provides a convenient method of showing that point processes are indeed Poisson. If the theorem applies, so that we have a Poisson point process, then we just need to compute the intensity measure to fully determine its distribution. The result above was mentioned in the previous post, but I give a precise statement and proof here.
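The two conditions can be illustrated by simulation (a sketch of my own, not part of the post): for a homogeneous Poisson process on [0, 1], counts in disjoint sets come out uncorrelated, and each count has mean equal to its variance, as a Poisson random variable must.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample many realizations of a Poisson point process on [0, 1] with
# intensity 10: a Poisson(10) total count, then i.i.d. uniform points.
trials, rate = 20_000, 10.0
counts_A = np.empty(trials)
counts_B = np.empty(trials)
for i in range(trials):
    pts = rng.uniform(0.0, 1.0, rng.poisson(rate))
    counts_A[i] = np.sum(pts < 0.3)                    # A = [0, 0.3)
    counts_B[i] = np.sum((pts >= 0.5) & (pts < 0.9))   # B = [0.5, 0.9)

# Counts in A are Poisson(3), counts in B are Poisson(4), and the two
# are independent since A and B are disjoint.
```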

# Poisson Point Processes

The Poisson distribution models the number of events that occur in a specific period of time given that, at each instant, whether an event occurs or not is independent of what happens at all other times. Examples which are sometimes cited as candidates for the Poisson distribution include the number of phone calls handled by a telephone exchange on a given day, the number of decays of a radioactive material, and the number of bombs landing in a given area during the London Blitz of 1940-41. The Poisson process counts events which occur according to such distributions.

More generally, the events under consideration need not just happen at specific times, but also at specific locations in a space E. Here, E can represent an actual geometric space in which the events occur, such as the spatial distribution of bombs dropped during the Blitz shown in figure 1, but can also represent other quantities associated with the events. In this example, E could represent the 2-dimensional map of London, or could include both space and time so that ${E=F\times{\mathbb R}}$ where, now, F represents the 2-dimensional map and E is used to record both time and location of the bombs. A Poisson point process is a random set of points in E, such that the number that lie within any measurable subset is Poisson distributed. The aim of this post is to introduce Poisson point processes together with the mathematical machinery to handle such random sets.
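A spatial example can be simulated directly (my own sketch): sample a homogeneous Poisson point process on the unit square and check that the count falling in a sub-region is Poisson with mean proportional to the region's area.

```python
import numpy as np

rng = np.random.default_rng(4)

# A homogeneous Poisson point process on the unit square with intensity
# lam: a Poisson(lam) total count, with points placed independently and
# uniformly over the square.
lam, trials = 50.0, 20_000
counts = np.empty(trials)
for i in range(trials):
    xy = rng.uniform(0.0, 1.0, (rng.poisson(lam), 2))
    # Count points in the measurable sub-rectangle [0, 0.2] x [0, 0.3].
    counts[i] = np.sum((xy[:, 0] < 0.2) & (xy[:, 1] < 0.3))

# The count should be Poisson with mean lam * 0.2 * 0.3 = 3, so its
# sample mean and variance should agree.
```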

The choice of distribution is not arbitrary. Rather, it is a result of the independence of the number of events in each region of the space which leads to the Poisson distribution, much like the central limit theorem leads to the ubiquity of the normal distribution for continuous random variables and of Brownian motion for continuous stochastic processes. A random finite subset S of a reasonably ‘nice’ (standard Borel) space E is a Poisson point process so long as it satisfies the properties,

• If ${A_1,\ldots,A_n}$ are pairwise-disjoint measurable subsets of E, then the sizes of ${S\cap A_1,\ldots,S\cap A_n}$ are independent.
• Individual points of the space each have zero probability of being in S. That is, ${{\mathbb P}(x\in S)=0}$ for each ${x\in E}$.

The proof of this important result will be given in a later post.

We have come across Poisson point processes previously in my stochastic calculus notes. Specifically, suppose that X is a cadlag ${{\mathbb R}^d}$-valued stochastic process with independent increments, and which is continuous in probability. Then, the set of points ${(t,\Delta X_t)}$ over times t for which the jump ${\Delta X}$ is nonzero gives a Poisson point process on ${{\mathbb R}_+\times{\mathbb R}^d}$. See lemma 4 of the post on processes with independent increments, which corresponds precisely to definition 5 given below.
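A compound Poisson process is the simplest example of such an X, and the Poisson property of its jump set can be checked numerically (a sketch under my own parameter choices: rate-5 jump times on [0, 2] with Exp(1) jump sizes).

```python
import numpy as np

rng = np.random.default_rng(5)

# A compound Poisson process is cadlag with independent increments and
# continuous in probability, so its jumps (t, dX_t) should form a
# Poisson point process on R_+ x R.  Count, per path, the jumps on
# [0, 2] whose size is at least 1.
rate, T, trials = 5.0, 2.0, 20_000
counts = np.empty(trials)
for i in range(trials):
    sizes = rng.exponential(1.0, rng.poisson(rate * T))
    counts[i] = np.sum(sizes >= 1.0)

# The intensity mass of the box [0, T] x [1, inf) is
# rate * T * P(Exp(1) >= 1) = 10 / e, so the count should be Poisson
# with that mean.
```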