Brownian bridges were described in a previous post, along with various different methods by which they can be constructed. Since a Brownian bridge on an interval [0, T] is continuous and equal to zero at both endpoints, we can consider extending it to the entire real line by partitioning the real numbers into intervals of length T and replicating the path of the process across each of these. This will result in continuous and periodic sample paths, suggesting another method of representing Brownian bridges. That is, by Fourier expansion. As we will see, the Fourier coefficients turn out to be independent normal random variables, giving a useful alternative method of constructing a Brownian bridge.
There are actually a couple of distinct Fourier expansions that can be used, depending on precisely how we consider extending the sample paths to the real line. A particularly simple result is given by the sine series, which I describe first. This is shown for an example Brownian bridge sample path in figure 1 above, which plots the sequence of approximations formed by truncating the series after a small number of terms. This tends uniformly to the sample path, although it is quite slow to converge, as should be expected when approximating such a rough path by smooth functions. Also plotted is the series after the first 100 terms, by which time the approximation is quite close to the target. For simplicity, I only consider standard Brownian bridges, which are defined on the unit interval [0, 1]. This does not reduce the generality, since bridges on an interval [0, T] can be expressed as scaled versions of standard Brownian bridges.
Theorem 1 A standard Brownian bridge B can be decomposed as

B_t = Σ_{n=1}^∞ Z_n √2 sin(nπt) / (nπ)

over 0 ≤ t ≤ 1, where Z_1, Z_2, … is an IID sequence of standard normal random variables. This series converges uniformly in t, both with probability one and in the L^p norm for all p ≥ 1.
A Brownian bridge can be defined as standard Brownian motion conditioned on hitting zero at a fixed future time T, or as any continuous process with the same distribution as this. Rather than conditioning, a slightly easier approach is to subtract a linear term from the Brownian motion, chosen such that the resulting process hits zero at the time T. This is equivalent, but has the added benefit of being independent of the original Brownian motion at all later times.
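The linear-subtraction construction just described is easy to check numerically. The sketch below (an illustration only; numpy is assumed, and the grid size and seed are arbitrary) simulates a Brownian motion on a grid and subtracts the linear term so that the resulting path hits zero at time T:

```python
import numpy as np

rng = np.random.default_rng(7)
T, steps = 2.0, 1000
dt = T / steps
t = np.linspace(0, T, steps + 1)

# Standard Brownian motion sampled on a grid: cumulative sum of
# independent N(0, dt) increments, started at zero.
x = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), steps))])

# Subtract the linear term (t/T) * X_T, so the result hits zero at time T.
b = x - (t / T) * x[-1]
print(b[0], b[-1])
```

Whatever the simulated path, the resulting process is zero at both endpoints by construction.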
Lemma 1 Let X be a standard Brownian motion and T > 0 be a fixed time. Then, the process

B_t = X_t − (t/T) X_T

over 0 ≤ t ≤ T, is a Brownian bridge on [0, T], independent of {X_t: t ≥ T}.
Here, I apply the theory outlined in the previous post to fully describe the drawdown point process of a standard Brownian motion. In fact, as I will show, the drawdowns can all be constructed from independent copies of a single ‘Brownian excursion’ stochastic process. Recall that we start with a continuous stochastic process X, assumed here to be Brownian motion, and define its running maximum as M_t = sup_{s≤t} X_s and drawdown process D = M − X. This is as in figure 1 above.
Next, e^a was defined to be the drawdown ‘excursion’ over the interval at which the maximum process is equal to the value a. Precisely, if we let τ_a = inf{t ≥ 0: X_t = a} be the first time at which X hits level a and τ_{a+} be its right limit then,

e^a_t = D_{(τ_a + t) ∧ τ_{a+}}.
Next, a random set S is defined as the collection of all nonzero drawdown excursions indexed by the running maximum,

S = {(a, e^a): e^a ≠ 0}.
The set of drawdown excursions corresponding to the sample path from figure 1 is shown in figure 2 below.
As described in the post on semimartingale local times, the joint distribution of the drawdown and running maximum (D, M), of a Brownian motion, is identical to the distribution of its absolute value and local time at zero, (|X|, L). Hence, the point process consisting of the drawdown excursions indexed by the running maximum, and the absolute value of the excursions from zero indexed by the local time, both have the same distribution. So, the theory described in this post applies equally to the excursions away from zero of a Brownian motion.
Before going further, let’s recap some of the technical details. The excursions lie in the space E of continuous paths ω: R_+ → R, on which we define a canonical process Z by sampling the path at each time t, Z_t(ω) = ω(t). This space is given the topology of uniform convergence over finite time intervals (compact open topology), which makes it into a Polish space, and whose Borel sigma-algebra is equal to the sigma-algebra generated by {Z_t: t ≥ 0}. As shown in the previous post, the counting measure

ξ(A) = #(S ∩ A)

is a random point process on R_+ × E. In fact, it is a Poisson point process, so its distribution is fully determined by its intensity measure μ = Eξ.
Theorem 1 If X is a standard Brownian motion, then the drawdown point process ξ is Poisson with intensity measure μ = λ ⊗ ν where,
λ is the standard Lebesgue measure on R_+.
ν is a sigma-finite measure on E given by

ν(f) = lim_{x→0} x^{−1} E_x[f(Z^σ)]
for all bounded continuous maps f: E → R which vanish on paths of length less than L (some L > 0). The limit is taken over x → 0, E_x denotes expectation under the measure with respect to which Z is a Brownian motion started at x, and σ is the first time at which Z hits 0. This measure satisfies the following properties,
ν-almost everywhere, there exists a time σ > 0 such that Z > 0 on (0, σ) and Z = 0 everywhere else.
for each t > 0, the distribution of Z_t has density

p(z) = z t^{−3/2} (2π)^{−1/2} e^{−z²/(2t)}

over the range z > 0.
over t > 0, Z is Markov, with transition function of a Brownian motion stopped at zero.
For a continuous real-valued stochastic process X with running maximum M_t = sup_{s≤t} X_s, consider its drawdown. This is just the amount that it has dropped since its maximum so far,

D_t = M_t − X_t,
which is a nonnegative process hitting zero whenever the original process visits its running maximum. By looking at each of the individual intervals over which the drawdown is positive, we can break it down into a collection of finite excursions above zero. Furthermore, the running maximum is constant across each of these intervals, so it is natural to index the excursions by this maximum process. By doing so, we obtain a point process. In many cases, it is even a Poisson point process. I look at the drawdown in this post as an example of a point process which is a bit more interesting than the previous example given of the jumps of a cadlag process. By piecing the drawdown excursions back together, it is possible to reconstruct D from the point process. At least, this can be done so long as the original process does not monotonically increase over any nontrivial intervals, so that there are no intervals with zero drawdown. As the point process indexes the drawdown by the running maximum, we can also reconstruct X as X = M − D. The drawdown point process therefore gives an alternative description of our original process.
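For a discretely sampled path, the running maximum and drawdown are straightforward to compute, and the identity X = M − D can be verified directly. A minimal numpy sketch (the random-walk path is just an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(3)
# A random-walk stand-in for the sample path of X.
x = np.cumsum(rng.normal(size=1000))

m = np.maximum.accumulate(x)   # running maximum M_t = sup_{s<=t} X_s
d = m - x                      # drawdown D_t = M_t - X_t

# D is nonnegative, and X is recovered as M - D.
print(d.min(), np.allclose(x, m - d))
```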
See figure 1 for the drawdown of the bitcoin price valued in US dollars between April and December 2020. As it makes more sense for this example, the drawdown is shown as a percentage of the running maximum, rather than in dollars. This is equivalent to the approach taken in this post applied to the logarithm of the price return over the period, so that X_t is the logarithm of the price at time t relative to its initial value. It can be noted that, as the price was mostly increasing, the drawdown consists of a relatively large number of small excursions. If, on the other hand, it had declined, then it would have been dominated by a single large drawdown excursion covering most of the time period.
For simplicity, I will suppose that X_0 = 0 and that M_t tends to infinity as t goes to infinity. Then, for each a ≥ 0, define the random time τ_a at which the process first hits level a,

τ_a = inf{t ≥ 0: X_t = a}.
By construction, this is finite, increasing, and left-continuous in a. Consider, also, the right limits τ_{a+} = lim_{b↓a} τ_b. Each of the intervals on which the drawdown is positive is equal to one of the intervals (τ_a, τ_{a+}). The excursion e^a is defined as a continuous stochastic process equal to the drawdown starting at time τ_a and stopped at time τ_{a+},

e^a_t = D_{(τ_a + t) ∧ τ_{a+}}.
This is a continuous nonnegative real-valued process, which starts at zero and is equal to zero at all times after τ_{a+} − τ_a. Note that there are uncountably many values for a but, the associated excursion e^a will be identically zero other than for the countably many values of a at which τ_{a+} > τ_a. We will only be interested in these nonzero excursions.
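For discretely sampled paths, the decomposition of the drawdown into excursions indexed by the running maximum can be sketched as follows (an illustration only, using a short deterministic test path; numpy is assumed):

```python
import numpy as np

def drawdown_excursions(x):
    """Split the drawdown of a discrete path into its excursions above zero.
    Returns a list of (level, path) pairs: the running-maximum level at which
    the excursion occurs, and the drawdown values over that interval."""
    m = np.maximum.accumulate(x)
    d = m - x
    excursions = []
    start = None
    for i, v in enumerate(d):
        if v > 0 and start is None:
            start = i                     # excursion begins
        elif v == 0 and start is not None:
            # the running maximum is constant over the excursion interval
            excursions.append((m[start], d[start:i]))
            start = None
    if start is not None:                 # excursion still open at the end
        excursions.append((m[start], d[start:]))
    return excursions

x = np.array([0.0, 1.0, 0.5, 0.2, 1.0, 2.0, 1.5, 2.0, 3.0])
for level, path in drawdown_excursions(x):
    print(level, path)
```

Each pair (level, path) is a discrete analogue of a point (a, e^a) of the set S.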
As usual, we work with respect to an underlying probability space (Ω, F, P), so that we have one path of the stochastic process X defined for each ω ∈ Ω. Associated to this is the collection of drawdown excursions indexed by the running maximum,

S = {(a, e^a): e^a ≠ 0}.
As S is defined for each given sample path, it depends on the choice of ω ∈ Ω, so is a countable random set. The sample paths of the excursions lie in the space of continuous functions R_+ → R, which I denote by E. For each time t ≥ 0, I use Z_t to denote the value of the path sampled at time t,

Z_t: E → R, Z_t(ω) = ω(t).
Use ℰ to denote the sigma-algebra on E generated by the collection of maps Z_t, so that (E, ℰ) is the measurable space in which the excursion paths lie. It can be seen that ℰ is the Borel sigma-algebra generated by the open subsets of E, with respect to the topology of compact convergence. That is, the topology of uniform convergence on finite time intervals. As this is a complete separable metric space, it makes (E, ℰ) into a standard Borel space.
Lemma 1 The set S defines a simple point process ξ on R_+ × E,

ξ(A) = #(S ∩ A)

for all A ∈ B(R_+) ⊗ ℰ.
From the definition of point processes, this simply means that ξ(A) is a measurable random variable for each A and that there exists a sequence of sets A_n covering the space such that ξ(A_n) are almost surely finite. The set of drawdowns for the point process corresponding to the bitcoin prices in figure 1 is shown in figure 2 below.
If S is a finite random set in a standard Borel measurable space (E, ℰ) satisfying the following two properties,
if A, B ∈ ℰ are disjoint, then the sizes of S ∩ A and S ∩ B are independent random variables,
P(x ∈ S) = 0 for each x ∈ E,
then it is a Poisson point process. That is, the size of S ∩ A is a Poisson random variable for each A ∈ ℰ. This justifies the use of Poisson point processes in many different areas of probability and stochastic calculus, and provides a convenient method of showing that point processes are indeed Poisson. If the theorem applies, so that we have a Poisson point process, then we just need to compute the intensity measure to fully determine its distribution. The result above was mentioned in the previous post, but I give a precise statement and proof here.
The Poisson distribution models numbers of events that occur in a specific period of time given that, at each instant, whether an event occurs or not is independent of what happens at all other times. Examples which are sometimes cited as candidates for the Poisson distribution include the number of phone calls handled by a telephone exchange on a given day, the number of decays of a radioactive material, and the number of bombs landing in a given area during the London Blitz of 1940-41. The Poisson process counts events which occur according to such distributions.
More generally, the events under consideration need not just happen at specific times, but also at specific locations in a space E. Here, E can represent an actual geometric space in which the events occur, such as the spatial distribution of bombs dropped during the Blitz shown in figure 1, but can also represent other quantities associated with the events. In this example, E could represent the 2-dimensional map of London, or could include both space and time so that E = R_+ × F where, now, F represents the 2-dimensional map and E is used to record both time and location of the bombs. A Poisson point process is a random set of points in E, such that the number that lie within any measurable subset is Poisson distributed. The aim of this post is to introduce Poisson point processes together with the mathematical machinery to handle such random sets.
The choice of distribution is not arbitrary. Rather, the independence of the number of events in each region of the space leads to the Poisson distribution, much like the central limit theorem leads to the ubiquity of the normal distribution for continuous random variables and of Brownian motion for continuous stochastic processes. A random finite subset S of a reasonably ‘nice’ (standard Borel) space E is a Poisson point process so long as it satisfies the properties,
If A_1, …, A_n are pairwise-disjoint measurable subsets of E, then the sizes of S ∩ A_1, …, S ∩ A_n are independent.
Individual points of the space each have zero probability of being in S. That is, P(x ∈ S) = 0 for each x ∈ E.
The proof of this important result will be given in a later post.
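Although the proof is deferred, the characterization suggests the standard way of simulating a homogeneous Poisson point process on a bounded region: draw a Poisson number of points and place them independently and uniformly, so that counts over disjoint regions are independent and single points carry zero probability. A minimal sketch (the intensity and region are arbitrary choices; numpy is assumed):

```python
import numpy as np

def poisson_point_process(rate, width, height, rng):
    """Sample a homogeneous Poisson point process on [0,width] x [0,height].
    The total count is Poisson(rate * area); given the count, the points
    are placed independently and uniformly over the rectangle."""
    n = rng.poisson(rate * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

rng = np.random.default_rng(0)
pts = poisson_point_process(rate=50.0, width=1.0, height=1.0, rng=rng)

# The count in a subregion of area 1/4 is Poisson with mean rate/4.
in_sub = np.sum((pts[:, 0] < 0.5) & (pts[:, 1] < 0.5))
print(len(pts), in_sub)
```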
We have come across Poisson point processes previously in my stochastic calculus notes. Specifically, suppose that X is a cadlag R^d-valued stochastic process with independent increments, and which is continuous in probability. Then, the set of points (t, ΔX_t) over times t for which the jump ΔX_t is nonzero gives a Poisson point process on R_+ × R^d. See lemma 4 of the post on processes with independent increments, which corresponds precisely to definition 5 given below.
The local time of a semimartingale X at a level x is a continuous increasing process, giving a measure of the amount of time that the process spends at the given level. As the definition involves stochastic integrals, it was only defined up to probability one. This can cause issues if we want to simultaneously consider local times at all levels. As x can be any real number, it can take uncountably many values and, as a union of uncountably many zero probability sets can have positive measure or, even, be unmeasurable, this is not sufficient to determine the entire local time ‘surface’

(t, x) ↦ L^x_t(ω)

for almost all ω ∈ Ω. This is the common issue of choosing good versions of processes. In this case, we already have a continuous version in the time index but, as yet, have not constructed a good version jointly in the time and level. This issue arose in the post on the Ito–Tanaka–Meyer formula, for which we needed to choose a version which is jointly measurable. Although that was sufficient there, joint measurability is still not enough to uniquely determine the full set of local times, up to probability one. The ideal situation is when a version exists which is jointly continuous in both time and level, in which case we should work with this choice. This is always possible for continuous local martingales.
Theorem 1 Let X be a continuous local martingale. Then, the local times

{L^x_t: t ≥ 0, x ∈ R}

have a modification which is jointly continuous in x and t. Furthermore, this is almost surely γ-Hölder continuous w.r.t. x, for all γ < 1/2 and over all bounded regions for t.
One of the common themes throughout the theory of continuous-time stochastic processes is the importance of choosing good versions of processes. Specifying the finite-dimensional distributions of a process is not sufficient to determine its sample paths so, if a continuous modification exists, then it makes sense to work with that. A relatively straightforward criterion ensuring the existence of a continuous version is provided by Kolmogorov’s continuity theorem.
For any positive real number γ, a map f: E → F between metric spaces E and F is said to be γ-Hölder continuous if there exists a positive constant C satisfying

d(f(x), f(y)) ≤ C d(x, y)^γ

for all x, y ∈ E. The smallest value of C satisfying this inequality is known as the γ-Hölder coefficient of f. Hölder continuous functions are always continuous and, at least on bounded spaces, Hölder continuity is a stronger property for larger values of the coefficient γ. So, if E is a bounded metric space and 0 < α ≤ β, then every β-Hölder continuous map from E is also α-Hölder continuous. In particular, 1-Hölder and Lipschitz continuity are equivalent.
Kolmogorov’s theorem gives simple conditions on the pairwise distributions of a process which guarantee the existence of a continuous modification but, also, states that the sample paths are almost surely locally Hölder continuous. That is, they are almost surely Hölder continuous on every bounded interval. To start with, we look at real-valued processes. Throughout this post, we work with respect to a probability space (Ω, F, P). There is no need to assume the existence of any filtration, since they play no part in the results here.
Theorem 1 (Kolmogorov) Let {X_t}_{t≥0} be a real-valued stochastic process such that there exist positive constants a, b, C satisfying

E[|X_t − X_s|^a] ≤ C|t − s|^{1+b}

for all s, t ≥ 0. Then, X has a continuous modification which, with probability one, is locally γ-Hölder continuous for all 0 < γ < b/a.
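As a sanity check of the hypothesis in a concrete case: Brownian increments X_t − X_s are normal with variance |t − s|, so E|X_t − X_s|^4 = 3|t − s|^2. This is the moment condition with a = 4 and b = 1, giving local γ-Hölder paths for all γ < 1/4. A quick Monte Carlo estimate of the fourth moment (sample size and times are arbitrary choices; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
s, t, n_samples = 0.3, 0.8, 200_000

# Brownian increments X_t - X_s are N(0, t - s), so E|X_t - X_s|^4
# should be close to 3 * (t - s)^2, Kolmogorov's condition with
# a = 4 and b = 1 (exponent 1 + b = 2 on |t - s|).
incs = rng.normal(0.0, np.sqrt(t - s), n_samples)
empirical = np.mean(incs ** 4)
theoretical = 3 * (t - s) ** 2
print(empirical, theoretical)
```

Using higher even moments a = 2n in the same way pushes the Hölder exponent b/a toward the optimal 1/2.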
Ito’s lemma is one of the most important and useful results in the theory of stochastic calculus. This is a stochastic generalization of the chain rule, or change of variables formula, and differs from the classical deterministic formulas by the presence of a quadratic variation term. One drawback which can limit the applicability of Ito’s lemma in some situations is that it only applies to twice continuously differentiable functions. However, the quadratic variation term can alternatively be expressed using local times, which relaxes the differentiability requirement. This generalization of Ito’s lemma was derived by Tanaka and Meyer, and applies to one-dimensional semimartingales.
The local time L^x of a stochastic process X at a fixed level x can be written, very informally, as an integral of a Dirac delta function with respect to the continuous part of the quadratic variation [X]^c,

L^x_t = ∫_0^t δ(X_s − x) d[X]^c_s.		(1)
This was explained in an earlier post. As the Dirac delta is only a distribution, and not a true function, equation (1) is not really a well-defined mathematical expression. However, as we saw, with some manipulation a valid expression can be obtained which defines the local time whenever X is a semimartingale.
Going in a slightly different direction, we can try multiplying (1) by a bounded measurable function f(x) and integrating over x. Commuting the order of integration on the right hand side, and applying the defining property of the delta function, that ∫ f(x) δ(X_s − x) dx is equal to f(X_s), gives

∫_{−∞}^{∞} f(x) L^x_t dx = ∫_0^t f(X_s) d[X]^c_s.		(2)
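Equation (2) is a form of the occupation times formula, and can be checked numerically for a simulated Brownian motion: binning the time spent near each level x plays the role of L^x_t dx, while for Brownian motion d[X]^c_s = ds. A rough sketch (the grid, bin count and test function are arbitrary choices; numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(11)
dt, n = 1e-4, 100_000
x = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])
f = lambda v: np.exp(-v ** 2)   # an arbitrary bounded test function

# Left side of (2): the histogram of time spent near each level x
# approximates L^x_t dx, against which we sum f.
hist, edges = np.histogram(x, bins=300, weights=np.full(len(x), dt))
centers = 0.5 * (edges[:-1] + edges[1:])
lhs = np.sum(f(centers) * hist)

# Right side of (2): integrate f(X_s) d[X]^c_s = f(X_s) ds directly.
rhs = np.sum(f(x) * dt)
print(lhs, rhs)
```

The two sums agree up to discretization error, as equation (2) predicts.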
By eliminating the delta function, the right hand side has been transformed into a well-defined expression. In fact, it is now the left side of the identity that is a problem, since the local time was only defined up to probability one at each level x. Ignoring this issue for the moment, recall the version of Ito’s lemma for general non-continuous semimartingales,

f(X_t) = f(X_0) + ∫_0^t f′(X_{s−}) dX_s + (1/2)∫_0^t f″(X_s) d[X]^c_s + Σ_{s≤t} (Δf(X_s) − f′(X_{s−})ΔX_s),

where Δf(X_s) = f(X_s) − f(X_{s−}). Equation (2) allows us to express this quadratic variation term using local times,

(1/2)∫_0^t f″(X_s) d[X]^c_s = (1/2)∫_{−∞}^{∞} f″(x) L^x_t dx.
The benefit of this form is that, even though it still uses the second derivative of f, it is only really necessary for this to exist in a weaker, measure theoretic, sense. Suppose that f is convex, or a linear combination of convex functions. Then, its right-hand derivative f′(x+) exists, and is itself of locally finite variation. Hence, the Stieltjes integral ∫ L^x_t df′(x+) exists. The infinitesimal df′(x+) is alternatively written f″(dx) and, in the twice continuously differentiable case, equals f″(x)dx. Then,

f(X_t) = f(X_0) + ∫_0^t f′(X_{s−}) dX_s + (1/2)∫_{−∞}^{∞} L^x_t f″(dx) + Σ_{s≤t} (Δf(X_s) − f′(X_{s−})ΔX_s).
Fubini’s theorem states that, subject to precise conditions, it is possible to switch the order of integration when computing double integrals. In the theory of stochastic calculus, we also encounter double integrals and would like to be able to commute their order. However, since these can involve stochastic integration rather than the usual deterministic case, the classical results are not always applicable. To help with such cases, we could do with a new stochastic version of Fubini’s theorem. Here, I will consider the situation where one integral is of the standard kind with respect to a finite measure, and the other is stochastic. To start, recall the classical Fubini theorem.
Theorem 1 (Fubini) Let (X, A, μ) and (Y, B, ν) be finite measure spaces, and f: X × Y → R be a bounded A ⊗ B-measurable function. Then,

∫_X ∫_Y f(x, y) dν(y) dμ(x) = ∫_Y ∫_X f(x, y) dμ(x) dν(y).
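Although the theorem concerns general finite measure spaces, the discrete case already illustrates it: summing a bounded function against two finite weight vectors in either order gives the same value. A toy numpy sketch (the sizes and weights are arbitrary choices):

```python
import numpy as np

# Discrete analogue of Fubini: mu and nu are finite measures on
# {0,...,5} and {0,...,7}, given by nonnegative weight vectors, and
# the bounded measurable function f is a matrix of values f(x, y).
rng = np.random.default_rng(5)
mu = rng.uniform(0, 1, size=6)        # weights of the measure mu
nu = rng.uniform(0, 1, size=8)        # weights of the measure nu
f = rng.uniform(-1, 1, size=(6, 8))   # bounded function f(x, y)

# integrate over y first, then over x
lhs = np.sum(mu * np.sum(f * nu, axis=1))
# integrate over x first, then over y
rhs = np.sum(nu * np.sum(f.T * mu, axis=1))
print(lhs, rhs)
```

The stochastic version of the theorem replaces one of these integrals by a stochastic integral, which is where the classical result no longer applies directly.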