For a continuous real-valued stochastic process $X = (X_t)_{t\ge0}$ with running maximum $M_t = \sup_{s\le t}X_s$, consider its drawdown. This is just the amount that it has dropped since its maximum so far,

$$D_t = M_t - X_t,$$
which is a nonnegative process hitting zero whenever the original process visits its running maximum. By looking at each of the individual intervals over which the drawdown is positive, we can break it down into a collection of finite excursions above zero. Furthermore, the running maximum is constant across each of these intervals, so it is natural to index the excursions by this maximum process. By doing so, we obtain a point process. In many cases, it is even a Poisson point process. I look at the drawdown in this post as an example of a point process which is a bit more interesting than the previous example given of the jumps of a cadlag process. By piecing the drawdown excursions back together, it is possible to reconstruct $D$ from the point process. At least, this can be done so long as the original process does not monotonically increase over any nontrivial intervals, so that there are no intervals with zero drawdown. As the point process indexes the drawdown by the running maximum, we can also reconstruct X as $X = M - D$. The drawdown point process therefore gives an alternative description of our original process.
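As a concrete illustration, here is a minimal numerical sketch (the helper names are my own) of the drawdown of a discretely sampled path, together with the reconstruction $X = M - D$:

```python
import numpy as np

def running_max(x):
    """Running maximum M_t = sup_{s <= t} X_s of a sampled path."""
    return np.maximum.accumulate(x)

def drawdown(x):
    """Drawdown D_t = M_t - X_t: the drop below the maximum so far."""
    return running_max(x) - x

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1000))  # a random-walk sample path
m, d = running_max(x), drawdown(x)
# d is nonnegative, vanishes exactly where x is at its running max,
# and x is recovered as m - d
```

The drawdown is zero precisely at the times when the path sets a new maximum, which is where the excursions discussed below begin and end.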
See figure 1 for the drawdown of the bitcoin price valued in US dollars between April and December 2020. As it makes more sense for this example, the drawdown is shown as a percent of the running maximum, rather than in dollars. This is equivalent to the approach taken in this post applied to the logarithm of the price over the period, so that the percentage drawdown is $1 - X_t/M_t = 1 - e^{-D_t}$, with $D$ here denoting the drawdown of $\log X$. It can be noted that, as the price was mostly increasing, the drawdown consists of a relatively large number of small excursions. If, on the other hand, it had declined, then it would have been dominated by a single large drawdown excursion covering most of the time period.
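To make the log-price equivalence concrete, a small sketch with a simulated positive price series (illustrative data, not the actual bitcoin prices) checks that the percentage drawdown is a monotone transform of the drawdown of the log price:

```python
import numpy as np

rng = np.random.default_rng(1)
# a strictly positive toy price series (illustrative, not real market data)
price = np.exp(np.cumsum(0.01 * rng.standard_normal(500)))

m = np.maximum.accumulate(price)
pct_drawdown = 1 - price / m  # drawdown as a fraction of the running max

# drawdown of the log price; log is increasing, so log(m) is its running max
d_log = np.maximum.accumulate(np.log(price)) - np.log(price)
# pct_drawdown equals 1 - exp(-d_log), so the two descriptions agree
```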
For simplicity, I will suppose that $X_0 = 0$ and that $X_t$ tends to infinity as t goes to infinity. Then, for each $a \ge 0$, define the random time $\tau_a$ at which the process first hits level $a$,

$$\tau_a = \inf\{t \ge 0 : X_t \ge a\}.$$
By construction, this is finite, increasing, and left-continuous in $a$. Consider, also, the right limits $\tau_{a+} = \lim_{b\downarrow a}\tau_b$. Each of the excursions on which the drawdown is positive is equal to one of the intervals $(\tau_a, \tau_{a+})$. The excursion $e^a$ is defined as a continuous stochastic process equal to the drawdown starting at time $\tau_a$ and stopped at time $\tau_{a+}$,

$$e^a_t = D_{(\tau_a + t)\wedge\tau_{a+}}.$$
This is a continuous nonnegative real-valued process, which starts at zero and is equal to zero at all times $t \ge \tau_{a+} - \tau_a$. Note that there are uncountably many values for $a$ but the associated excursion $e^a$ will be identically zero other than for the countably many values of $a$ at which $\tau_{a+} > \tau_a$. We will only be interested in these nonzero excursions.
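In discrete time, the excursions and their indexing levels can be read off a sampled path directly. A hedged sketch (function names are illustrative):

```python
import numpy as np

def drawdown_excursions(x):
    """Return pairs (a, e): the running-maximum level a indexing each
    excursion, and the sampled drawdown path e over the maximal interval
    on which it is positive. A discrete-time sketch of the e^a above."""
    m = np.maximum.accumulate(x)
    d = m - x
    pairs, i = [], 0
    while i < len(d):
        if d[i] > 0:
            j = i
            while j < len(d) and d[j] > 0:
                j += 1
            pairs.append((m[i], d[i:j]))  # m is constant on [i, j)
            i = j
        else:
            i += 1
    return pairs

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2000))
excursions = drawdown_excursions(x)
```

Since the running maximum is constant across each excursion interval, the levels indexing successive excursions are nondecreasing along the path.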
As usual, we work with respect to an underlying probability space $(\Omega,\mathcal{F},\mathbb{P})$, so that we have one path of the stochastic process X defined for each $\omega\in\Omega$. Associated to this is the collection of drawdown excursions indexed by the running maximum,

$$\{(a, e^a) : a \in S\},\qquad S = \{a \ge 0 : e^a \ne 0\}.$$
As S is defined for each given sample path, it depends on the choice of $\omega\in\Omega$, so is a countable random set. The sample paths of the excursions lie in the space of continuous functions $\mathbb{R}^+\to\mathbb{R}^+$, which I denote by E. For each time $t\ge0$, I use $Z_t$ to denote the value of the path sampled at time t,

$$Z_t : E \to \mathbb{R}^+,\qquad Z_t(e) = e_t.$$
Use $\mathcal{E}$ to denote the sigma-algebra on E generated by the collection of maps $\{Z_t : t \in \mathbb{R}^+\}$, so that $(E,\mathcal{E})$ is the measurable space in which the excursion paths lie. It can be seen that $\mathcal{E}$ is the Borel sigma-algebra generated by the open subsets of E, with respect to the topology of compact convergence. That is, the topology of uniform convergence on finite time intervals. As this is a complete separable metric space, it makes $(E,\mathcal{E})$ into a standard Borel space.
Lemma 1 The set S defines a simple point process $\xi$ on $(\mathbb{R}^+\times E,\ \mathcal{B}(\mathbb{R}^+)\otimes\mathcal{E})$,

$$\xi(A) = \#\{a \in S : (a, e^a) \in A\},$$

for all $A \in \mathcal{B}(\mathbb{R}^+)\otimes\mathcal{E}$.
From the definition of point processes, this simply means that $\xi(A)$ is a measurable random variable for each $A\in\mathcal{B}(\mathbb{R}^+)\otimes\mathcal{E}$ and that there exists a sequence $A_1, A_2, \ldots$ covering $\mathbb{R}^+\times E$ such that the $\xi(A_n)$ are almost surely finite. The set of drawdowns for the point process corresponding to the bitcoin prices in figure 1 is shown in figure 2 below.
Before proceeding with the proof of lemma 1, I introduce a useful bit of notation. The length $T(e)$ of an excursion $e \in E$ will be written as,

$$T(e) = \inf\{s \ge 0 : e_t = 0 \text{ for all } t \ge s\}.$$
If $T = T(e)$, this means that $e_t = 0$ for all $t \ge T$ and, so long as T is strictly positive, there exist times arbitrarily close to T at which $e$ is nonzero.
Proof of lemma 1: For any real $b > 0$, the path of the drawdown process D restricted to the range $[0, \tau_b]$ contains all of the drawdown excursions $e^a$ for $a < b$, defined on pairwise disjoint subintervals of this range. Hence, their lengths must sum to at most $\tau_b$,

$$\sum_{a\in S,\, a < b} T(e^a) \le \tau_b.$$
Choosing $L > 0$, let $E_L \subseteq E$ consist of the paths of length at least L. Then,

$$\xi([0, b)\times E_L) \le \tau_b / L,$$
which is finite. Hence, if we choose a sequence $b_n$ tending to infinity and $L_n$ tending to zero, then the sets $[0, b_n)\times E_{L_n}$ all have finite $\xi$-measure and, together with the collection of identically zero paths in E (which have zero $\xi$-measure), these cover all of $\mathbb{R}^+\times E$.
Only measurability of $\xi(A)$ for each $A\in\mathcal{B}(\mathbb{R}^+)\otimes\mathcal{E}$ remains to be shown. Choose a sequence of positive times $t_1, t_2, \ldots$ which is dense in $\mathbb{R}^+$. As each of the nontrivial excursion intervals $(\tau_a, \tau_{a+})$ must contain one of these times, we can list the drawdown excursions in a sequence,

$$(a_n, e^{a_n}),\qquad a_n = M_{t_n}.$$
We should exclude the trivial drawdown excursions and, to avoid double counting, we should exclude the times $t_n$ for which $a_n = a_m$ for some $m < n$. This gives,

$$\xi(A) = \sum_{n=1}^\infty 1_{\{e^{a_n}\ne0,\ a_m\ne a_n\ \text{for all}\ m<n\}}\, 1_A(a_n, e^{a_n}).$$
As $a\mapsto\tau_a$ is left-continuous and $a\mapsto\tau_{a+}$ is right-continuous, we see that they are both jointly measurable. It follows that $(\omega, a, t)\mapsto e^a_t$ is jointly measurable, so that the expression above for $\xi(A)$ is a countable sum over measurable terms, so is itself measurable. ⬜
So far, so good. The drawdowns of the stochastic process X can be conveniently represented by a point process $\xi$ on the space $\mathbb{R}^+\times E$. However, for this simple fact to be really useful we should, at least, be able to say something about the distribution of $\xi$. In fact, there is a general result for strong Markov processes which says that the drawdowns form a Poisson point process and, so, its distribution is fully determined so long as we can compute the intensity measure $\mathbb{E}\xi$. For the Markov property to have any meaning, we must assume the existence of an underlying filtration $\{\mathcal{F}_t\}_{t\ge0}$, which we assume satisfies the usual conditions. In particular, that it is right-continuous.
Theorem 2 If X is strong Markov, then the drawdown point process is Poisson.
Proof: Recall that the strong Markov property means that there is a Markov transition function $\{P_t\}_{t\ge0}$ on $\mathbb{R}$ such that, for every stopping time $\sigma$, the process $(X_{\sigma+t})_{t\ge0}$ is Markov with this transition function, restricting to the event that $\sigma$ is finite. The proof will make use of the criteria given in theorem 4 of the previous post, which requires verifying two properties. For the first of these, fixing $A\in\mathcal{E}$, we need to show that the point process $\xi(\cdot\times A)$ on $\mathbb{R}^+$ has independent increments. As in the proof of corollary 6 of the previous post, it is sufficient to show that $\xi((a_1,b_1]\times A),\ldots,\xi((a_n,b_n]\times A)$ are independent, for any finite sequence of disjoint intervals $(a_k,b_k]$ ($k=1,\ldots,n$). Listing these intervals in increasing order, we note that $\xi((a_k,b_k]\times A)$ only depends on the paths of the process $(X_{\tau_{a_k}+t})_{t\ge0}$. As this is Markov with the given transition function with initial value $a_k$, independently of $\mathcal{F}_{\tau_{a_k}}$, it is independent of the stopped process $X^{\tau_{a_k}}$. As $\xi((a_j,b_j]\times A)$ for $j < k$ depends only on this stopped process, we see that $\xi((a_k,b_k]\times A)$ is independent of $\xi((a_1,b_1]\times A),\ldots,\xi((a_{k-1},b_{k-1}]\times A)$. Hence, by induction on k, we see that $\xi((a_1,b_1]\times A),\ldots,\xi((a_n,b_n]\times A)$ are independent, as required.
For the remaining property required to apply theorem 4 of the previous post, we need to show that $\xi(\{a\}\times E) = 0$ almost surely, for each fixed $a \ge 0$. Equivalently, $\tau_{a+} = \tau_a$ almost surely, for which it is sufficient to show that $X_{\tau_a+t} > a$ at arbitrarily small positive times t. As $\tau_{a+}$ is the limit of the stopping times $\tau_b$ as b decreases to a, it is also a stopping time. As $X_{\tau_a} = X_{\tau_{a+}} = a$, the strong Markov property says that the two processes $(X_{\tau_a+t})_{t\ge0}$ and $(X_{\tau_{a+}+t})_{t\ge0}$ have the same distribution. By construction, $X_{\tau_{a+}+t} > a$ for arbitrarily small positive times t so, with probability one, the same is true of $(X_{\tau_a+t})_{t\ge0}$. Hence, $\tau_{a+} = \tau_a$ almost surely, as required. ⬜
Once the intensity measure has been computed, this result enables us to quickly answer many questions about the drawdowns of a strong Markov process. For example, how many drawdowns of at least a given height K can there be before the process reaches a target level $b > 0$? Theorem 2 tells us that it has a Poisson distribution with rate $\mathbb{E}\xi([0,b)\times A)$, where $A\subseteq E$ is the collection of paths $e$ satisfying $\sup_t e_t \ge K$. Similarly, how many drawdown periods of length at least T can we expect? Again, this has a Poisson distribution, with rate parameter $\mathbb{E}\xi([0,b)\times A)$ where A is the collection of paths satisfying $T(e) \ge T$.
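In a discrete-time simulation, both counts are obtained by counting the excursion points $(a, e^a)$ falling in the corresponding product sets. A sketch with simulated data and illustrative thresholds:

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.cumsum(rng.standard_normal(50_000))  # a sampled path standing in for X
m = np.maximum.accumulate(x)
d = m - x

# start index of each maximal interval on which the drawdown is positive
starts = np.flatnonzero((d > 0) & (np.concatenate(([0.0], d[:-1])) == 0))
levels = m[starts]                               # running max indexing each excursion
heights = np.maximum.reduceat(d, starts)         # sup of each excursion
lengths = np.add.reduceat((d > 0).astype(int), starts)  # duration in samples

b, K, T = m[-1], 5.0, 100                        # illustrative thresholds
n_height = int(np.sum((levels < b) & (heights >= K)))  # count in [0,b) x {sup e >= K}
n_length = int(np.sum((levels < b) & (lengths >= T)))  # count in [0,b) x {T(e) >= T}
```

The theorem then says that, for a strong Markov process, such counts are Poisson distributed with rate given by the intensity measure of the corresponding set.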
Let us go further and derive an expression for the intensity measure $\mathbb{E}\xi$ of the drawdown point process, completely determining its distribution. Given a Markov transition function $\{P_t\}_{t\ge0}$, use $\mathbb{P}_x$ for the unique probability measure on the space of continuous paths, with coordinate process Z, for which Z is Markov with respect to $\{P_t\}$ and with initial value $Z_0 = x$. The notation $\mathbb{E}_x$ will be used for expectation with respect to this measure. Also, let $H_a$ denote the first time that Z reaches level $a$,

$$H_a = \inf\{t \ge 0 : Z_t = a\}.$$
So, for real numbers $a$ and $h > 0$, the process $Z^{H_{a+h}}$, under measure $\mathbb{P}_a$, is strong Markov with initial value a and stopped at $H_{a+h}$. Then, we see that $a + h - Z^{H_{a+h}}$ is nonnegative with initial value h and stopped as soon as it hits zero. Taking the limit as h goes to zero gives the intensity measure for the drawdown excursions.
Theorem 3 If X is strong Markov, then the intensity measure of $\xi$ is given by,

$$\mathbb{E}\xi(f) = \int_0^\infty \lim_{h\downarrow0} h^{-1}\,\mathbb{E}_a\left[f\bigl(a,\, a+h-Z^{H_{a+h}}\bigr)\right]da. \qquad (1)$$
Here, $f\colon\mathbb{R}^+\times E\to\mathbb{R}$ is any continuous and bounded function such that there are reals $b, L > 0$ with $f(a, e) = 0$ whenever either $a \ge b$ or $T(e) < L$. Then, $f$ is $\mathbb{E}\xi$-integrable and (1) holds.
Proof: Without loss of generality, we suppose that $f$ is nonnegative and bounded by 1. For any $a \ge 0$ and $h > 0$, write

$$e^{a,h}_t = a + h - X_{(\tau_a+t)\wedge\tau_{a+h}}.$$
By the strong Markov property, the process $(X_{\tau_a+t})_{t\ge0}$ has distribution $\mathbb{P}_a$ and, so, the right hand side of (1) can be written as

$$\lim_{h\downarrow0}\mathbb{E}\left[h^{-1}\int_0^\infty f(a, e^{a,h})\,da\right]. \qquad (2)$$
We just need to show that this is finite and equal to $\mathbb{E}\xi(f)$.
Start by looking at the expression inside the expectation on the right hand side of (2). Fixing a sample path for X, we recall that $e^a = 0$, except when $a \in S$, which can only occur for the countably many $a$ at which $\tau_{a+} > \tau_a$. Fixing any $b' > b$, let $a_1 < \cdots < a_m$ be the finite collection of values in the interval $[0, b']$ for which $T(e^{a_i}) \ge L/2$. As $\tau_a$ is left continuous and increasing in a with jump sizes $\tau_{a+} - \tau_a$, for all sufficiently small values of h it follows that $T(e^{a,h}) = \tau_{a+h} - \tau_a < L$, and hence $f(a, e^{a,h}) = 0$, unless $a \ge b$ or the interval $(a, a+h]$ contains one of the levels $a_i$. Hence, for sufficiently small h,

$$h^{-1}\int_0^\infty f(a, e^{a,h})\,da = \sum_{i=1}^m h^{-1}\int_{a_i-h}^{a_i} f(a, e^{a,h})\,da \;\longrightarrow\; \sum_{i=1}^m f(a_i, e^{a_i}) = \xi(f).$$
The limit holds as $h\downarrow0$ by continuity of $f$, since $e^{a,h}$ tends to $e^{a_i}$ uniformly on compacts as $h \to 0$ and $a_i - h \le a < a_i$. If we were to substitute this limit into the expectation on the right hand side of (2) then we would obtain $\mathbb{E}\xi(f)$, which is (1) as required.
To complete the proof, we need to verify the validity of commuting the limit with the expectation on the right hand side of (2), and that the result is finite. Dominated convergence will be used. From the bound on $f$, the expression inside the expectation is bounded by,

$$h^{-1}\int_0^b 1_{\{\tau_{a+h}-\tau_a \ge L\}}\,da, \qquad (3)$$
which we need to show is bounded above by an integrable random variable, independently of $h \le 1$. We set $\hat\tau$ equal to $\tau$, but with the jumps capped by L,

$$\hat\tau_a = \tau_a - \sum_{c < a}\left(\tau_{c+} - \tau_c - L\right)^+.$$
Then, $\hat\tau_{a+h} - \hat\tau_a \ge L$ whenever $\tau_{a+h} - \tau_a \ge L$, so (3) can be bounded as,

$$h^{-1}\int_0^b 1_{\{\tau_{a+h}-\tau_a\ge L\}}\,da \le \frac{1}{hL}\int_0^b\left(\hat\tau_{a+h}-\hat\tau_a\right)da \le \frac{\hat\tau_{b+h}}{L} \le \frac{\hat\tau_{b+1}}{L}.$$
It just remains to show that $\hat\tau_{b+1}$ is integrable. The following alternative expression for $\hat\tau_c$, with $c = b + 1$, will help where, for integer $n \ge 1$, I am writing $\delta^n_k = \tau_{kc/n} - \tau_{(k-1)c/n}$,

$$\hat\tau_c = \lim_{n\to\infty}\sum_{k=1}^n\left(\delta^n_k\wedge L\right).$$
Note that $a\mapsto\tau_a$ has independent increments since, for $a < b$, $\tau_b - \tau_a$ is the first time at which the process $(X_{\tau_a+t})_{t\ge0}$ hits $b$ which, by the strong Markov property, is independent of $\mathcal{F}_{\tau_a}$. Hence, for each n, $\delta^n_1\wedge L,\ldots,\delta^n_n\wedge L$ is an independent sequence of random variables. Choosing $\lambda > 0$ sufficiently small that $\lambda L \le 1$, so that $e^{-\lambda y}\le 1 - \lambda y/2 \le e^{-\lambda y/2}$ for all $y\in[0,L]$,

$$\mathbb{E}\left[e^{-\lambda\sum_k(\delta^n_k\wedge L)}\right] = \prod_{k=1}^n\mathbb{E}\left[e^{-\lambda(\delta^n_k\wedge L)}\right] \le \exp\left(-\frac{\lambda}{2}\,\mathbb{E}\left[\sum_k\left(\delta^n_k\wedge L\right)\right]\right).$$
Taking the limit as n goes to infinity and applying Fatou's lemma on the right hand side gives,

$$\mathbb{E}\left[e^{-\lambda\hat\tau_c}\right] \le e^{-\frac{\lambda}{2}\mathbb{E}[\hat\tau_c]}.$$
As $\hat\tau_c$ is almost surely finite, the left hand side is strictly positive and, hence, $\mathbb{E}[\hat\tau_c]$ is finite as required. ⬜
An obvious case of a continuous strong Markov process to which we may want to apply the theory above is standard Brownian motion. Then, theorem 3 allows us to explicitly compute the intensity measure, which I will show in a follow-up post.
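As a hedged preview of that computation, a gambler's-ruin heuristic applied to the stopped process appearing in theorem 3 suggests that, for standard Brownian motion, drawdown excursions of height at least K occur at rate 1/K per unit increase of the running maximum. A simple-random-walk simulation (illustrative parameters; the walk approximates Brownian motion) is consistent with this:

```python
import numpy as np

def deep_excursion_count(x, k):
    """Count maximal positive-drawdown intervals of x whose depth reaches k,
    returned together with the final running maximum of x."""
    m = np.maximum.accumulate(x)
    d = m - x
    starts = np.flatnonzero((d > 0) & (np.concatenate(([0], d[:-1])) == 0))
    if starts.size == 0:
        return 0, m[-1]
    return int(np.sum(np.maximum.reduceat(d, starts) >= k)), m[-1]

rng = np.random.default_rng(4)
K, total, gain = 10, 0, 0.0
for _ in range(20):                      # aggregate over independent walks
    steps = rng.choice([-1, 1], size=200_000)
    c, m_last = deep_excursion_count(np.cumsum(steps), K)
    total += c
    gain += m_last
rate = total / gain
print(rate)  # empirically close to 1/K = 0.1
```

This is only a sanity check, of course; the follow-up post derives the full intensity measure from theorem 3.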