Drawdown Point Processes

Figure 1: Bitcoin price and drawdown between April and December 2020

For a continuous real-valued stochastic process {\{X_t\}_{t\ge0}} with running maximum {M_t=\sup_{s\le t}X_s}, consider its drawdown. This is simply the amount by which the process has dropped from its maximum so far,

\displaystyle  D_t=M_t-X_t,

which is a nonnegative process hitting zero whenever the original process visits its running maximum. By looking at the individual intervals over which the drawdown is positive, we can break it down into a collection of finite excursions above zero. Furthermore, the running maximum is constant across each of these intervals, so it is natural to index the excursions by this maximum process. By doing so, we obtain a point process and, in many cases, even a Poisson point process. In this post, I look at the drawdown as an example of a point process which is a bit more interesting than the previous example of the jumps of a càdlàg process. By piecing the drawdown excursions back together, it is possible to reconstruct {D_t} from the point process, at least so long as the original process does not increase monotonically over any nontrivial interval, so that there are no nontrivial intervals on which the drawdown vanishes. As the point process indexes the drawdowns by the running maximum, we can also reconstruct X as {X_t=M_t-D_t}. The drawdown point process therefore gives an alternative description of our original process.

See figure 1 for the drawdown of the bitcoin price, valued in US dollars, between April and December 2020. As it makes more sense for this example, the drawdown is shown as a percentage of the running maximum rather than in dollars. This is equivalent to the approach of this post applied to the logarithm of the price return over the period, so that {X_t=\log(B_t/B_0)}. Note that, as the price was mostly increasing, the drawdown consists of a relatively large number of small excursions. If, on the other hand, it had declined, then it would have been dominated by a single large drawdown excursion covering most of the time period.
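To make these definitions concrete, here is a minimal Python sketch using a simulated toy price series (not the actual bitcoin data; all variable names are my own). It computes the running maximum and drawdown, checks the reconstruction {X_t=M_t-D_t}, and verifies that the percentage drawdown plotted in figure 1 equals {1-e^{-D_t}} for the logarithmic process.

```python
# A minimal sketch with simulated toy data (not the actual bitcoin prices).
import numpy as np

rng = np.random.default_rng(1)
B = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(1000)))  # toy price path

X = np.log(B / B[0])          # X_t = log(B_t / B_0), as in the text
M = np.maximum.accumulate(X)  # running maximum M_t = sup_{s<=t} X_s
D = M - X                     # drawdown D_t = M_t - X_t

assert np.allclose(X, M - D)  # reconstruction X_t = M_t - D_t

# Percentage drawdown, as plotted in figure 1: 1 - B_t / max_{s<=t} B_s,
# which equals 1 - exp(-D_t) for the logarithmic process X.
pct = 1 - B / np.maximum.accumulate(B)
assert np.allclose(pct, 1 - np.exp(-D))
```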

For simplicity, I will suppose that {X_0=0} and that {M_t} tends to infinity as t goes to infinity. Then, for each {a\ge0}, define the random time at which the process first hits level {a},

\displaystyle  \tau_a=\inf\left\{t\ge 0\colon X_t\ge a\right\}.

By construction, this is finite, increasing, and left-continuous in {a}. Consider, also, the right limits {\tau_{a+}=\lim_{b\downarrow a}\tau_b}. Each of the intervals over which the drawdown is positive is of the form {(\tau_a,\tau_{a+})} for some {a\ge0}. The associated excursion is defined as a continuous stochastic process {\{D^a_t\}_{t\ge0}} equal to the drawdown starting at time {\tau_a} and stopped at time {\tau_{a+}},

\displaystyle  D^a_t=D_{(\tau_a+t)\wedge\tau_{a+}}=a-X_{(\tau_a+t)\wedge\tau_{a+}}.

This is a continuous nonnegative real-valued process which starts at zero and is equal to zero at all times after {\tau_{a+}-\tau_a}. Note that there are uncountably many values of {a} but the associated excursion is identically zero except for the countably many values at which {\tau_{a+} > \tau_a}. We will only be interested in these nonzero excursions.
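The decomposition into excursions indexed by the running maximum is easy to carry out on a sampled path. The following Python sketch (a hypothetical helper, not from the original post) collects the nonzero excursions as pairs {(a, D^a)}: each maximal interval on which the sampled drawdown is positive contributes one excursion, and the running maximum is constant across it.

```python
import numpy as np

def drawdown_excursions(X):
    """Return pairs (a, d): the running-maximum level a and the sampled
    excursion path d (the consecutive strictly positive drawdown values)."""
    M = np.maximum.accumulate(X)
    D = M - X
    excursions = []
    i, n = 0, len(X)
    while i < n:
        if D[i] > 0:
            j = i
            while j < n and D[j] > 0:
                j += 1
            # the running maximum is constant across the excursion interval
            excursions.append((M[i], D[i:j]))
            i = j
        else:
            i += 1
    return excursions

rng = np.random.default_rng(2)
X = np.cumsum(0.1 * rng.standard_normal(500))  # toy random walk
S = drawdown_excursions(X)
print(len(S), "nonzero excursions")
```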

As usual, we work with respect to an underlying probability space {(\Omega,\mathcal F,{\mathbb P})}, so that we have one path of the stochastic process X defined for each {\omega\in\Omega}. Associated to this is the collection of drawdown excursions indexed by the running maximum,

\displaystyle  S=\left\{(a,D^a)\colon a\ge0,\ D^a\not=0\right\}.

As S is defined for each given sample path, it depends on the choice of {\omega\in\Omega}, so is a countable random set. The sample paths of the excursions {D^a} lie in the space of continuous functions {{\mathbb R}_+\rightarrow{\mathbb R}}, which I denote by E. For each time {t\ge0}, I use {Z_t} to denote the value of the path sampled at time t,

\displaystyle  \begin{aligned} &E=\left\{z\colon {\mathbb R}_+\rightarrow{\mathbb R}{\rm\ is\ continuous}\right\},\\ &Z_t\colon E\rightarrow{\mathbb R},\\ &Z_t(z)=z_t. \end{aligned}

Use {\mathcal E} to denote the sigma-algebra on E generated by the collection of maps {\{Z_t\colon t\ge0\}}, so that {(E,\mathcal E)} is the measurable space in which the excursion paths lie. It can be seen that {\mathcal E} is the Borel sigma-algebra generated by the open subsets of E with respect to the topology of compact convergence, that is, the topology of uniform convergence on bounded time intervals. As E is a complete separable metric space under this topology, {(E,\mathcal E)} is a standard Borel space.

Lemma 1 The set S defines a simple point process {\xi} on {{\mathbb R}_+\times E},

\displaystyle  \xi(A)=\#(S\cap A)

for all {A\in\mathcal B({\mathbb R}_+)\otimes\mathcal E}.

From the definition of point processes, this simply means that {\xi(A)} is a measurable random variable for each {A\in \mathcal B({\mathbb R}_+)\otimes\mathcal E} and that there exists a sequence {A_n\in \mathcal B({\mathbb R}_+)\otimes\mathcal E} covering {{\mathbb R}_+\times E} such that the {\xi(A_n)} are almost surely finite. The drawdown excursions making up the point process for the bitcoin prices in figure 1 are shown in figure 2 below.

Figure 2: Bitcoin price drawdowns between April and December 2020

Before proceeding with the proof of lemma 1, I introduce a useful bit of notation. The length of an excursion {z\in E} will be written as,

\displaystyle  {\rm len}(z)=\inf\{T\ge0\colon z_t=0{\rm\ for\ all\ }t\ge T\}.

If {T={\rm len}(z)}, this means that {z_t=0} for all {t\ge T} and, so long as T is strictly positive, there exist times {t < T} arbitrarily close to T at which {z_t} is nonzero.

Proof of lemma 1: For any real {K > 0}, the path of the drawdown process D restricted to the range {[0,\tau_K]} contains all of the drawdown excursions {D^a} for {a < K} defined on pairwise disjoint subintervals of this range. Hence, their lengths must sum to at most {\tau_K},

\displaystyle  \sum_{a < K}{\rm len}(D^a)\le \tau_K.

Choosing {L > 0}, let {A_L\in\mathcal E} consist of the paths of length at least L. Then,

\displaystyle  \begin{aligned} \xi([0,K)\times A_L) &=\sum_{a < K}1_{\{{\rm len}(D^a)\ge L\}}\\ &\le L^{-1}\sum_{a < K}{\rm len}(D^a)\\ &\le L^{-1}\tau_K, \end{aligned}

which is finite. Hence, if we choose a sequence {K_n > 0} tending to infinity and {L_n} tending to zero, then the sets {[0,K_n)\times A_{L_n}\in\mathcal B({\mathbb R}_+)\otimes\mathcal E} all have finite {\xi}-measure and, together with the collection of identically zero paths in E (which have zero {\xi}-measure), these cover all of {{\mathbb R}_+\times E}.
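As a quick numerical sanity check of this bound, the sketch below reuses the hypothetical drawdown_excursions helper from above on a discretized random walk, counting the excursions of length at least L completed before the path first reaches K and comparing with the bound {\tau_K/L}.

```python
# Empirical check of xi([0,K) x A_L) <= tau_K / L on a toy discretized path,
# reusing the drawdown_excursions helper sketched above.
import numpy as np

rng = np.random.default_rng(3)
dt = 0.001
X = np.cumsum(np.sqrt(dt) * rng.standard_normal(200_000))

K, L = 1.0, 0.05
hit = int(np.argmax(X >= K))   # first sample with X >= K (assumes K is reached)
tau_K = hit * dt

S = drawdown_excursions(X[:hit + 1])
count = sum(len(d) * dt >= L for a, d in S)
print(count, "excursions of length >=", L, "; bound tau_K / L =", tau_K / L)
```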

Only measurability of {\xi(A)} for each {A\in\mathcal B({\mathbb R}_+)\otimes\mathcal E} remains to be shown. Choose a sequence of positive times {t_1,t_2,\ldots} which is dense in {{\mathbb R}_+}. As each of the nontrivial excursion intervals must contain one of these times, we can list the drawdown excursions in a sequence,

\displaystyle  D^{M_{t_1}},D^{M_{t_2}},D^{M_{t_3}},\ldots.

We should exclude the trivial drawdown excursions and, to avoid double counting, we should exclude the times {t_n} for which {M_{t_n}=M_{t_m}} for some {m < n}. This gives,

\displaystyle  \xi(A)=\sum_{n=1}^\infty 1_{\{M_{t_n}\not= M_{t_m}{\rm\ for\ all\ }m < n\}}1_{\{D^{M_{t_n}}\not=0\}}1_{\{(M_{t_n},D^{M_{t_n}})\in A\}}.

As {\tau_a(\omega)} is left-continuous in a and {\tau_{a+}(\omega)} is right-continuous, we see that they are both jointly measurable. It follows that {D^a_t(\omega)} is jointly measurable, so that the expression above for {\xi(A)} is a countable sum of measurable terms and, hence, is itself measurable. ⬜

So far, so good. The drawdowns of the stochastic process X can be conveniently represented by a point process on the space {{\mathbb R}_+\times E}. However, for this simple fact to be really useful, we should at least be able to say something about the distribution of {\xi}. In fact, there is a general result for strong Markov processes which says that the drawdowns form a Poisson point process, so that the distribution is fully determined as soon as we can compute the intensity measure {{\mathbb E}\xi}. For the Markov property to have any meaning, we must assume the existence of an underlying filtration {\{\mathcal F_t\}_{t\ge0}}, which we assume satisfies the usual conditions. In particular, it is right-continuous.

Theorem 2 If X is strong Markov, then the drawdown point process {\xi} is Poisson.

Proof: Recall that the strong Markov property means that there is a Markov transition function {\{P_t\}_{t\ge0}} on {{\mathbb R}} such that, for every stopping time {\tau}, the process {t\mapsto X_{\tau+t}} is Markov with this transition function, restricted to the event that {\tau} is finite. The proof will make use of the criteria given in theorem 4 of the previous post, which requires verifying two properties. For the first of these, fixing {U\in\mathcal B({\mathbb R}_+)\otimes\mathcal E}, we need to show that the point process {\eta(A)=\xi((A\times E)\cap U)} on {{\mathbb R}_+} has independent increments. As in the proof of corollary 6 of the previous post, it is sufficient to show that {\eta_k\equiv\eta([a_k,b_k))} are independent, for any finite sequence of disjoint intervals {[a_k,b_k)} ({k=1,\ldots,n}). Listing these intervals in increasing order, we note that {\eta_k} only depends on the path of the process {t\mapsto X_{\tau_{a_k}+t}}. As this is Markov with the given transition function and initial value {a_k}, independently of {\mathcal F_{\tau_{a_k}}}, it is independent of the stopped process {X^{\tau_{a_k}}}. As {\eta_j} for {j < k} depends only on this stopped process, we see that {\eta_k} is independent of {\eta_1,\ldots,\eta_{k-1}}. Hence, by induction on k, the {\eta_1,\ldots,\eta_n} are independent, as required.

For the remaining property required to apply theorem 4 of the previous post, we need to show that {\xi(\{a\}\times E)=0} almost surely, for each fixed {a\ge0}. Equivalently, {D^a=0} almost surely, for which it is sufficient to show that {\tau_{a+}=\tau_a}. As {\tau_{a+}} is the limit of {\tau_{a+1/n}}, it is also a stopping time. As {X_{\tau_a}=X_{\tau_{a+}}=a}, the strong Markov property says that the two processes {X_{\tau_a+t}} and {X_{\tau_{a+}+t}} have the same distribution. By construction, {X_{\tau_{a+}+t} > a} for arbitrarily small positive times t so, with probability one, the same is true of {X_{\tau_a+t}}. Hence, {\tau_{a+}=\tau_a} almost surely, as required. ⬜

Once the intensity measure {\mu={\mathbb E}\xi} has been computed, this result enables us to quickly answer many questions about the drawdowns of a strong Markov process. For example, how many drawdowns of height at least K can there be before the process reaches a target level {a}? Theorem 2 tells us that this number has a Poisson distribution with rate {\mu([0,a]\times A)}, where {A\in\mathcal E} is the collection of paths {z\in E} satisfying {\sup_tz_t\ge K}. Similarly, how many drawdown periods of length at least T can we expect before the process reaches level a? Again, this has a Poisson distribution, with rate parameter {\mu([0,a]\times A)} where A is the collection of paths satisfying {{\rm len}(z)\ge T}.
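As a hedged illustration, take X to be standard Brownian motion. A simple first-exit heuristic suggests that an excursion starting with the maximum at level m reaches height K before the maximum increases by {\epsilon} with probability {\epsilon/(K+\epsilon)\approx\epsilon/K}, so the number of drawdowns of height at least K should be Poisson with rate roughly {a/K}. The following sketch (a numerical experiment, not the explicit computation deferred to the follow-up post) estimates this rate from a single long simulated path.

```python
# Estimate the rate of drawdowns of height >= K per unit of running maximum
# for (discretized) standard Brownian motion, comparing with the heuristic 1/K.
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-4
X = np.cumsum(np.sqrt(dt) * rng.standard_normal(2_000_000))
M = np.maximum.accumulate(X)
D = M - X

K = 0.25
zeros = np.flatnonzero(D == 0)             # samples where X sits at its maximum
heights = np.maximum.reduceat(D, zeros)    # max drawdown between successive maxima
big = np.count_nonzero(heights[:-1] >= K)  # completed excursions of height >= K

print("big drawdowns per unit of maximum:", big / M[zeros[-1]],
      " heuristic 1/K:", 1 / K)
```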

Let us go further and derive an expression for the intensity measure of the drawdown point process, completely determining its distribution. Given a Markov transition function {\{P_t\}_{t\ge0}}, use {{\mathbb P}_a} for the unique probability measure on {(E,\mathcal E)} under which Z is Markov with transition function {P_t} and initial value {Z_0=a}. The notation {{\mathbb E}_a[\cdot]} will be used for expectation with respect to this measure. Also, let {\sigma_a} denote the first time at which Z reaches level {a},

\displaystyle  \sigma_a=\inf\left\{t\ge0\colon Z_t\ge a\right\}.

So, for real numbers {a} and {\epsilon > 0}, the process {Z^{\sigma_{a+\epsilon}}_t=Z_{t\wedge\sigma_{a+\epsilon}}} is, under the measure {{\mathbb P}_a}, strong Markov with initial value {a} and stopped when it reaches {a+\epsilon}. Then, {a+\epsilon-Z^{\sigma_{a+\epsilon}}} is nonnegative with initial value {\epsilon} and is stopped as soon as it hits zero. Taking the limit as {\epsilon} goes to zero gives the intensity measure of the drawdown excursions.

Theorem 3 If X is strong Markov, then the intensity measure of {\xi} is given by,

\displaystyle  {\mathbb E}\xi(f)=\lim_{\epsilon\rightarrow0}\int_0^\infty \epsilon^{-1}{\mathbb E}_a[f(a,a+\epsilon-Z^{\sigma_{a+\epsilon}})]da. (1)

Here, {f\colon {\mathbb R}_+\times E\rightarrow{\mathbb R}} is any continuous and bounded function such that there are reals {K,L > 0} with {f(a,z)=0} whenever either {a > K} or {{\rm len}(z) < L}. Then, {f} is {{\mathbb E}\xi}-integrable and (1) holds.

Proof: Without loss of generality, we suppose that {f} is nonnegative and bounded by 1. For any {a\ge0} and {\epsilon > 0}, write

\displaystyle  D^{a,\epsilon}_t = a+\epsilon - X_{(\tau_a+t)\wedge\tau_{a+\epsilon}}.

By the strong Markov property, the process {X_{\tau_a+t}} has distribution {{\mathbb P}_a} and, so, the right hand side of (1) can be written as

\displaystyle  \lim_{\epsilon\rightarrow0}\int_0^\infty \epsilon^{-1}{\mathbb E}[f(a,D^{a,\epsilon})]da = \lim_{\epsilon\rightarrow0}{\mathbb E}\left[\epsilon^{-1} \int_0^\infty f(a,D^{a,\epsilon})da\right] (2)

We just need to show that this is finite and equal to {{\mathbb E}\xi(f)}.

Start by looking at the expression inside the expectation on the right hand side of (2). Fixing a sample path for X, we recall that {f(a,D^{a,\epsilon})=0} except when {{\rm len}(D^{a,\epsilon})\ge L}, which can only occur when {\tau_{a+\epsilon}-\tau_a\ge L}. Fixing any {0 < L^\prime < L}, let {a_1,\ldots,a_n} be the finite collection of values {a} in the interval {[0,K]} for which {\tau_{a+}-\tau_a\ge L^\prime}. As {\tau_a} is left-continuous and increasing in {a} with jump size {\tau_{a+}-\tau_a}, for all sufficiently small values of {\epsilon} it follows that {\tau_{a+\epsilon}-\tau_a < L} unless {a > K} or the interval {[a,a+\epsilon]} contains one of the levels {a_k}. Hence, for sufficiently small {\epsilon},

\displaystyle  \begin{aligned} \epsilon^{-1} \int_0^\infty f(a,D^{a,\epsilon})da &=\sum_{k=1}^n\epsilon^{-1}\int_{a_k-\epsilon}^{a_k} f(a,D^{a,\epsilon})da\\ &\rightarrow \sum_{k=1}^n f(a_k,D^{a_k})\\ &=\sum_{a\in[0,K]}f(a,D^a) =\xi(f). \end{aligned}

The limit holds as {\epsilon\rightarrow0} by continuity of {f}, since {D^{a,\epsilon}\rightarrow D^{a_k}} uniformly as {a\uparrow a_k} and {a+\epsilon\downarrow a_k}. Substituting this limit into the expectation on the right hand side of (2) gives (1) as required.

To complete the proof, we need to verify the validity of commuting the limit with the expectation on the right hand side of (2), and that the result is finite. Dominated convergence will be used. From the bound on {f}, the expression inside the expectation is bounded by,

\displaystyle  \epsilon^{-1}\int_0^K1_{\{\tau_{a+\epsilon}-\tau_a\ge L\}}da, (3)

which we need to show is bounded above by an integrable random variable, independently of {\epsilon}. We set {\tau^\prime_a} equal to {\tau_a}, but with the jumps capped by L,

\displaystyle  \tau^\prime_a=\tau_a-\sum_{b < a}(\tau_{b+}-\tau_b-L)_+.

Then, {\tau^\prime_{a+\epsilon}-\tau^\prime_a\ge L} whenever {\tau_{a+\epsilon}-\tau_a\ge L}, so (3) can be rewritten as,

\displaystyle  \begin{aligned} \epsilon^{-1}\int_0^K1_{\{\tau^\prime_{a+\epsilon}-\tau^\prime_a\ge L\}}da &\le \epsilon^{-1}\int_0^{K-\epsilon}L^{-1}(\tau^\prime_{a+\epsilon}-\tau^\prime_a)da+1\\ &=\epsilon^{-1}L^{-1}\int_0^\epsilon(\tau^\prime_{K-\epsilon+x}-\tau^\prime_x)dx+1\\ &\le L^{-1}\tau^\prime_K+1. \end{aligned}

It just remains to show that {\tau^\prime_K} is integrable. The following alternative expression for {\tau^\prime_K} will help where, for integers {1\le m\le n}, I write {U_{mn}=(\tau_{mK/n}-\tau_{(m-1)K/n})\wedge L},

\displaystyle  \tau^\prime_K=\lim_{n\rightarrow\infty}\sum_{m=1}^nU_{mn}.

Note that {a\mapsto\tau_a} has independent increments since, for {a < b}, {\tau_b-\tau_a} is the first time at which {X_{\tau_a+t}} hits {b} which, by the strong Markov property, is independent of {\mathcal F_{\tau_a}}. Hence, for each n, {U_{1n},\ldots,U_{nn}} is a sequence of independent random variables. Choosing {\lambda > 0} sufficiently small that {e^{-2\lambda L} \le 1-\lambda L},

\displaystyle  \begin{aligned} {\mathbb E}\left[e^{-2\lambda\sum_m U_{mn}}\right] &=\prod_m{\mathbb E}\left[e^{-2\lambda U_{mn}}\right]\\ &\le\prod_m\left(1-\lambda{\mathbb E}[U_{mn}]\right)\\ &\le\prod_m e^{-\lambda{\mathbb E}[U_{mn}]} =e^{-\lambda{\mathbb E}[\sum_mU_{mn}]} \end{aligned}

Taking the limit as n goes to infinity and applying Fatou’s lemma on the right hand side gives,

\displaystyle  {\mathbb E}\left[e^{-2\lambda\tau^\prime_K}\right] \le e^{-\lambda{\mathbb E}[\tau^\prime_K]}.

As {\tau^\prime_K} is finite, the left hand side is strictly positive and, hence, {{\mathbb E}[\tau^\prime_K]} is finite as required. ⬜

An obvious case of a continuous strong Markov process to which we may want to apply the theory above is standard Brownian motion. Then, theorem 3 allows us to explicitly compute the intensity measure, which I will show in a follow-up post.
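As a quick informal preview, apply (1) with {f(a,z)=1_{\{a\le a_0\}}1_{\{\sup_tz_t\ge K\}}} (ignoring the continuity and length restrictions on f, so this is only a sketch). Under {{\mathbb P}_a}, the process {a+\epsilon-Z^{\sigma_{a+\epsilon}}} is a Brownian motion started at {\epsilon} and stopped when it hits zero, and its supremum reaches K precisely when it hits K before 0, which has probability {\epsilon/K} by the gambler's ruin formula. Hence,

\displaystyle  {\mathbb E}\xi(f)=\lim_{\epsilon\rightarrow0}\int_0^{a_0}\epsilon^{-1}\frac{\epsilon}{K}\,da=\frac{a_0}{K},

suggesting that the number of drawdowns of height at least K before the process first reaches level {a_0} is Poisson distributed with rate {a_0/K}, consistent with the heuristic given after theorem 2 above.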
