Having previously looked at Brownian bridges and excursions, I now turn to a third kind of process which can be constructed either as a conditioned Brownian motion or by extracting a segment from Brownian motion sample paths. Specifically, I look at the *Brownian meander*, which is a Brownian motion conditioned to be positive over a unit time interval. Since this requires conditioning on a zero probability event, care must be taken. Instead, it is cleaner to start with an alternative definition, obtained by appropriately scaling a segment of a Brownian motion sample path.

For a fixed positive time *T*, consider the last time *σ* before *T* at which a Brownian motion *X* is equal to zero,

(1) | *σ* = sup{*s* ∈ [0, *T*] : *X*_{s} = 0}.

On the interval [*σ*, *T*], the path of *X* starts from 0 and is then either strictly positive or strictly negative, and we may as well restrict to the positive case by taking absolute values. Scaling invariance says that *c*^{-1/2}*X*_{ct} is itself a standard Brownian motion for any positive constant *c*. So, scaling the path of *X* on [*σ*, *T*] to the unit interval defines a process

(2) | *B*_{t} = (*T* - *σ*)^{-1/2}|*X*_{σ + t(T - σ)}|

over 0 ≤ *t* ≤ 1. This starts from zero and is strictly positive at all other times.
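To make the construction concrete, here is a minimal Python sketch (my own illustration, not code from the post) which samples a Brownian path on a grid, takes the last sign change before *T* as a stand-in for the last zero *σ*, and rescales the remaining segment to the unit interval as in (2). The grid approximation of *σ* is an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dt = T / n

# Brownian motion sampled on a fine grid over [0, T].
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# sigma: last time before T at which the path changes sign,
# a grid approximation of the last zero of X.
cross = np.nonzero(X[:-1] * X[1:] <= 0)[0]
i = cross[-1]
sigma = i * dt

# Rescale the remaining segment to [0, 1]:
# B_t = (T - sigma)^{-1/2} |X_{sigma + t (T - sigma)}|.
B = np.abs(X[i:]) / np.sqrt(T - sigma)
t = np.linspace(0.0, 1.0, len(B))
```

The resulting path `B` starts at (approximately) zero and stays strictly positive thereafter, as the definition requires.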

The only remaining ambiguity in this construction is the choice of the fixed time *T*. Scaling invariance shows that the law of the process *B* does not, in fact, depend on this choice.

Lemma 1 The distribution of *B* defined by (2) does not depend on the choice of the time *T* > 0.

*Proof:* Consider any other fixed positive time *T̃*, and use the construction above with *T̃*, *σ̃*, *B̃* in place of *T*, *σ*, *B* respectively. We need to show that *B̃* and *B* have the same distribution. Using the scaling factor *S* = *T̃*/*T*, then *X*′_{t} = *S*^{-1/2}*X*_{tS} is a standard Brownian motion. Also, *σ*′ = *σ̃*/*S* is the last time before *T* at which *X*′ is zero. So,

*B̃*_{t} = (*T̃* - *σ̃*)^{-1/2}|*X*_{σ̃ + t(T̃ - σ̃)}| = (*T* - *σ*′)^{-1/2}|*X*′_{σ′ + t(T - σ′)}|

has the same distribution as *B*. ⬜

This leads to the definition used here for Brownian meanders.

Definition 2 A continuous process {*B*_{t}}_{t ∈ [0, 1]} is a *Brownian meander* if and only if it has the same distribution as (2) for a standard Brownian motion *X* and fixed time *T* > 0.

In fact, there are various alternative, but equivalent, ways in which Brownian meanders can be defined and constructed.

- As a scaled segment of a Brownian motion before a time *T* and after it last hits 0. This is definition 2.
- As a Brownian motion conditioned on being positive. See theorem 4 below.
- As a segment of a Brownian excursion. See lemma 5.
- As the path of a standard Brownian motion starting from its minimum, in either the forwards or backwards direction. See theorem 6.
- As a Markov process with specified transition probabilities. See theorem 9 below.
- As a solution to an SDE. See theorem 12 below.

Recall that a standard Brownian motion run up until it last hits 0 before time *T* is a scaled Brownian bridge. By the definition above, the remaining path after it last hits zero is a Brownian meander. This gives a decomposition of the Brownian path into independent components.

Theorem 3 Let *X* be a standard Brownian motion, *T* > 0 be a fixed time and *σ* < *T* be as defined by (1). Then, the following collections of random variables are all independent of each other:

- The scaled path of *X* after time *σ* defined by (2), which is a Brownian meander.
- The process *σ*^{-1/2}*X*_{tσ} over 0 ≤ *t* ≤ 1, which is a Brownian bridge.
- *σ*/*T*, which has the arcsine distribution.
- sgn(*X*_{T}), which has the Rademacher distribution.

*Proof:* Lemma 15 of the Brownian bridge post tells us that *σ*^{-1/2}*X*_{tσ} is a Brownian bridge independently of the remaining path 1_{[σ, T]}*X*, so is independent of the remaining random variables listed.

Next, the distribution of standard Brownian motion is unchanged under flipping the sign, *X* → –*X*, and this transformation does not affect the meander given by (2) or the time *σ*, but flips the sign of *X*_{T}. So, sgn(*X*_{T}) must have the Rademacher distribution independently of the other random variables.

It just remains to show that *σ*/*T* has the arcsine distribution,

ℙ(*σ* ≤ *s*) = (2/π)arcsin√(*s*/*T*)

over 0 ≤ *s* ≤ *T*, independently of the meander *B*. As the distribution was already computed in lemma 5 of the post on excursions, only independence remains.

Fixing a time *s* < *T*, let

*τ* = inf{*t* ≥ *s* : *X*_{t} = 0},

so that *τ* ≤ *T* precisely on the event that *σ* ≥ *s*. By the strong Markov property, *X*_{τ + t} is a Brownian motion independent of ℱ_{τ} and, by scaling invariance, so is

*X̃*_{t} = ((*T* - *τ*)/*T*)^{-1/2}*X*_{τ + t(T - τ)/T},

conditioned on *τ* ≤ *T*. However, the Brownian meander defined by (2) using *X̃* in place of *X* is equal to *B*. Hence, conditioned on *σ* ≥ *s*, *B* still has the distribution of a Brownian meander, showing that it is independent of *σ*. ⬜

Since the original Brownian motion sample path on interval [0, *T*] can be reconstructed from the components listed, theorem 3 provides an alternative construction of Brownian motion from independent parts.
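The decomposition of theorem 3 is easy to probe numerically. The sketch below (my own illustration, under the assumption that the last zero can be approximated by the last grid sign change) simulates many Brownian paths and checks that *σ*/*T* has mean close to 1/2, the arcsine mean, while sgn(*X*_{T}) averages close to 0, as a Rademacher variable should.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n, paths = 1.0, 2_000, 4_000
dt = T / n

# Many Brownian paths on a grid over [0, T].
incs = rng.normal(0.0, np.sqrt(dt), (paths, n))
X = np.concatenate([np.zeros((paths, 1)), np.cumsum(incs, axis=1)], axis=1)

# Approximate the last zero before T by the last sign change on the grid.
prod = X[:, :-1] * X[:, 1:]
last = np.array([np.nonzero(row <= 0)[0][-1] for row in prod])
sigma = last * dt

# The sign of the endpoint, which should be a fair coin flip.
signs = np.sign(X[:, -1])
```

The arcsine law on [0, 1] has mean 1/2, so `sigma.mean()` should land near 0.5 up to Monte Carlo and discretization error.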

As mentioned in the introduction, a meander can be constructed as a Brownian motion conditioned to be positive on the unit interval (0, 1]. As this involves conditioning on a zero probability event, we take a sequence of conditional probabilities for which the Brownian motion is restricted to positive sample paths in the limit. There are various ways in which this limit can be taken, but I will use an approach similar to the one in theorem 9 of the Brownian excursion post. We only condition on *X* being positive after a small positive time *ϵ*, which we allow to go to zero. The weak limit with respect to uniform convergence of sample paths is used, in the same way as in the Brownian excursion post.

Theorem 4 Let {*X*_{t}}_{t ∈ [0, 1]} be a standard Brownian motion, and *ϵ* > 0. Then, the distribution of *X* conditional on inf_{t ∈ [ϵ, 1]}*X*_{t} > 0 converges weakly to that of a Brownian meander as *ϵ* → 0.

*Proof:* Theorem 3 says that we can write

*X*_{t} = 1_{{t ≤ σ}}*σ*^{1/2}*Y*_{t/σ} + 1_{{t > σ}}*U*(1 - *σ*)^{1/2}*B*_{(t - σ)/(1 - σ)}

for a Brownian bridge *Y*, random time *σ*, Rademacher variable *U* and Brownian meander *B*. In particular, on the event {*U* = 1}, *X* converges uniformly to *B* as *σ* → 0. Furthermore, *Y*, *σ*, *U* and *B* are independent.

The event

*S*_{ϵ} = {inf_{t ∈ [ϵ, 1]}*X*_{t} > 0}

is equivalent to *σ* < *ϵ* and *U* = 1. So, conditioned on *S*_{ϵ}, as *ϵ* goes to zero, the distribution of *X* converges weakly to that of *B* which, being independent of *S*_{ϵ}, is a Brownian meander. ⬜

Recall that a Brownian excursion is the path of a Brownian motion over the interval on which it is nonzero about a positive time *T*, rescaled to the unit time interval. By the definition above, the section of the path just before *T* is a meander. It follows that a Brownian meander can be constructed as a scaled initial section of an excursion. However, truncating the excursion at a fixed time does not work. Instead, we should truncate at a random time, which can be shown to be distributed as the square of a uniform random variable.

Lemma 5 Let *X* be a Brownian excursion and, independently, *U* be a random variable uniformly distributed on the unit interval. Then *B*_{t} = *U*^{-1}*X*_{tU²} is a Brownian meander.

*Proof:* Here, let *X* be a standard Brownian motion, *σ* < *T* be given by (1), and *τ* > *T* be the first time after *T* at which *X* hits zero. By theorem 4 of the Brownian excursion post, *B*_{t} = (*τ* - *σ*)^{-1/2}|*X*_{σ + t(τ - σ)}| is a Brownian excursion independently of *σ*, *τ*. Then, by definition 2 above,

(*T* - *σ*)^{-1/2}|*X*_{σ + t(T - σ)}| = *U*^{-1}*B*_{tU²}

is a Brownian meander, where *U*² = (*T* - *σ*)/(*τ* - *σ*) is independent of *B*. However, we know the distribution of *σ*, *τ* from lemma 5 of the Brownian excursion post:
ℙ(*σ* ≤ *s*, *τ* ≥ *t*) = (2/π)arcsin√(*s*/*t*), for *s* ≤ *T* ≤ *t*.
Differentiating with respect to *s* and scaling so that it equals 1 when *t* = *T* gives conditional probabilities
ℙ(*τ* ≥ *t* | *σ* = *s*) = √((*T* - *s*)/(*t* - *s*)), for *t* ≥ *T*.
For any continuously distributed real random variable *Y* with tail distribution function *F*(*y*) = ℙ(*Y* > *y*), the variable *F*(*Y*) is uniform on the unit interval. Applying this to the conditional distribution of *τ* given *σ* computed above, *F*(*τ*) = √((*T* - *σ*)/(*τ* - *σ*)) = *U*, showing that *U* is uniform. ⬜
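The identity underlying lemma 5, that truncating the excursion over [*σ*, *τ*] at time *T* and rescaling recovers the meander, is a pathwise statement and can be checked numerically. The sketch below (mine, not from the post) extracts both processes from one simulated Brownian path; the interpolation grid, the seed search, and the error tolerances are all assumptions of the sketch.

```python
import numpy as np

T, dt = 1.0, 2e-5
iT = int(T / dt)

# Search over seeds for a path whose zeros straddling T are comfortably
# away from T, keeping interpolation error small (a tolerance assumption).
for seed in range(100):
    rng = np.random.default_rng(seed)
    X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 2 * iT))])
    isig = np.nonzero(X[:iT] * X[1:iT + 1] <= 0)[0][-1]
    aft = np.nonzero(X[iT:-1] * X[iT + 1:] <= 0)[0]
    sigma = isig * dt
    if len(aft) and sigma < T - 0.2 and (iT + aft[0]) * dt > T + 0.1:
        tau = (iT + aft[0]) * dt
        break

times = np.arange(len(X)) * dt
m = 20_000
s = np.linspace(0.0, 1.0, m)

# Excursion over [sigma, tau] and meander over [sigma, T], rescaled to [0, 1].
E = np.abs(np.interp(sigma + s * (tau - sigma), times, X)) / np.sqrt(tau - sigma)
M = np.abs(np.interp(sigma + s * (T - sigma), times, X)) / np.sqrt(T - sigma)

# Truncate and rescale the excursion: U^{-1} E_{t U^2} should recover M.
U = np.sqrt((T - sigma) / (tau - sigma))
recovered = np.interp(s * U**2, s, E) / U
err = np.abs(recovered - M)
```

The residual `err` is pure interpolation noise, since the identity is exact in the continuum.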

#### Brownian motion near a minimum

As explained above, for a standard Brownian motion *X* defined over the time interval [0, *T*], the sample path after it last hits zero defines a Brownian meander. However, there is another method of extracting meanders from the sample paths. Instead of the final time at which it hits zero, consider the time *τ* ∈ [0, *T*] at which *X* attains its pathwise minimum. By definition of the minimum, the section of the sample path started at this time will be bounded below by its initial value and, after subtracting *X*_{τ}, will be positive. The same is true for the segment of the path *before* *τ*, which we run in reverse time order. After rescaling, this gives a pair of independent meanders.

The idea is as in figure 3 below, where a Brownian motion in the top plot is decomposed into the pair of meanders in the bottom one.

This decomposition is similar in nature to the Vervaat transform from Brownian bridges to excursions. To be precise, I take *T* = 1 and the two meanders about the minimum of *X* at time *τ* are defined by

(3) | *Y*_{t} = (1 - *τ*)^{-1/2}(*X*_{τ + t(1 - τ)} - *X*_{τ}),  *Z*_{t} = *τ*^{-1/2}(*X*_{τ(1 - t)} - *X*_{τ})

over 0 ≤ *t* ≤ 1. The fact that these are independent meanders was shown by Denisov (1984) *A Random Walk and a Wiener Process Near a Maximum*. Whereas Denisov performs the decomposition about the *maximum* time, I look at the minimum time here. This is clearly the same thing, due to the symmetry of the Brownian motion distribution under reflection of its value about 0. I only use the minimum to avoid reflecting through zero, making the plots in figure 3 a bit easier to visualize. The precise statement which we will prove is:

Theorem 6 Let {*X*_{t}}_{t ∈ [0, 1]} be a standard Brownian motion. Then it almost surely has a unique minimum at a time *τ*, which has the arcsine distribution, and the processes *Y*, *Z* defined by (3) are both Brownian meanders. Furthermore, *Y*, *Z* and *τ* are independent.

This result is actually quite intuitive. For any fixed time 0 < *τ* < 1, the two processes *Y* and *Z* defined by (3) are independent standard Brownian motions. This still holds if we choose *τ* randomly, independently of *X*. Then, conditioning on *τ* being the time at which *X* achieves its minimum is the same as conditioning on *Y* and *Z* being positive, making them into meanders. However, this idea involves conditioning on zero probability events. It can be made rigorous by taking limits of discrete-time approximations in much the same way as the proof of Vervaat’s transform in the post on excursions.

Instead, I will use a bit of continuous-time stochastic calculus theory to show that the construction of meanders given in figure 3 can be directly translated into the construction (2) used to define Brownian meanders. For a standard Brownian motion *X*, with running maximum *X*^{∗}_{t} = sup_{s ≤ t}*X*_{s}, then the difference *X*^{∗}_{t} – *X*_{t} is a nonnegative process which hits zero every time *X* reaches a new maximum. This is called the drawdown process of *X*, as it represents the amount that it has drawn down from its maximum so far. It is well known that this has the same joint distribution as |*X*_{t}|, which is a ‘reflecting Brownian motion’. This can be proved by directly showing that *X*^{∗} – *X* and |*X*| are both Markov, and computing their transition probabilities. Instead, I will make use of some stochastic integration theory which directly constructs a Brownian motion *W* for which *W*^{∗} – *W* = |*X*|.

As theorem 6 was stated in terms of the minimum instead of the maximum, switch the sign of *W* so that *W* – *W*^{m} = |*X*| where *W*^{m}_{t} = inf_{s ≤ t}*W*_{s} is the running minimum. Then, the time *τ* at which *W* achieves its minimum value is the same as the final time at which *X* hits zero, and the process *Y* defined by (3) (with *W* in place of *X*) is exactly the same as the meander *B* defined by (2). This shows that *Y* is a Brownian meander by construction, and *τ* has the arcsine distribution by theorem 3. The process *Z* is also a Brownian meander by the exact same argument applied to the reversed time Brownian motion *W*_{1 - t} – *W*_{1}. We still need to show independence, but otherwise the proof of theorem 6 is almost complete.

Lemma 7 If *X* is a standard Brownian motion then |*X*_{t}| = *W*_{t} – *W*^{m}_{t} for the standard Brownian motion *W* defined by

(4) | *W*_{t} = ∫_{0}^{t} sgn(*X*_{s}) d*X*_{s}

*Proof:* Using theorem 11 of the post on local times, we have |*X*| = *L* + *W* for the standard Brownian motion *W* given by (4) and the local time *L* of *X* at zero which, by lemma 9 of the same post, satisfies *L* = –*W*^{m}. Plugging *L* = –*W*^{m} into |*X*| = *L* + *W* gives the result. ⬜

Proving theorem 6 by reducing equations (3) to the construction (2) of a Brownian meander is now almost a formality.

*Proof of theorem 6:* Starting with standard Brownian motion *X*, let *W* be the standard Brownian motion given by lemma 7. If *τ* is the final time at which *W* attains its minimum in the unit time interval, then this is the same as the last time at which *X* is equal to zero. So,
(1 - *τ*)^{-1/2}(*W*_{τ + t(1 - τ)} - *W*_{τ}) = (1 - *τ*)^{-1/2}|*X*_{τ + t(1 - τ)}|
which, by construction, is a Brownian meander. Also, theorem 3 shows that this meander, the time *τ* and the process *τ*^{-1/2}*X*_{tτ} (over 0 ≤ *t* ≤ 1) are independent, and *τ* has the arcsine distribution. As equation (4) expresses *W* in terms of *X*, we see that *τ*^{-1/2}*W*_{tτ}, *τ* and the meander constructed above are independent. As *X* and *W* have the same joint distribution, these statements still hold if *W* is replaced by *X*.

So, we have shown that if *τ* is the last time at which *X* achieves its minimum then it has the arcsine distribution, the process *Y* defined by (3) is a Brownian meander, and *Y*, *Z*, *τ* are independent.

Reversing the time direction (replace *X*_{t} by *X*_{1 - t} – *X*_{1}), we see that if *τ*_{0} ≤ *τ* is the *first* time at which *X* hits its minimum over the unit time interval, then this also has the arcsine distribution, so is almost surely equal to *τ*. It also shows that the process *Z* defined by (3) is a Brownian meander. ⬜
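Theorem 6 also lends itself to a quick simulation check. The sketch below (my own, with the pathwise minimum approximated on a grid) extracts the endpoints *Y*_{1} and *Z*_{1} of the two meanders; by lemma 11 below, each should have the Rayleigh distribution, whose mean is √(π/2) ≈ 1.25, and by independence their correlation should be near zero.

```python
import numpy as np

rng = np.random.default_rng(11)
n, paths = 2_000, 4_000
dt = 1.0 / n

incs = rng.normal(0.0, np.sqrt(dt), (paths, n))
X = np.concatenate([np.zeros((paths, 1)), np.cumsum(incs, axis=1)], axis=1)

# Grid approximation of the time tau of the pathwise minimum.
imin = np.argmin(X, axis=1)
tau = imin * dt
Xmin = X[np.arange(paths), imin]

# Discard rare edge cases where the grid minimum sits at time 0 or 1.
ok = (imin > 0) & (imin < n)
# Endpoints of the two meanders in (3).
Y1 = (X[ok, -1] - Xmin[ok]) / np.sqrt(1.0 - tau[ok])
Z1 = (-Xmin[ok]) / np.sqrt(tau[ok])
```

Both empirical means should sit near √(π/2) up to Monte Carlo and grid-minimum bias, and *τ* should again look arcsine distributed with mean 1/2.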

#### The Brownian meander distribution

I now explicitly describe the Brownian meander distribution, by showing that it is Markov and computing its transition probabilities. We obtain exact expressions for the probability densities, although they are not as nice as those obtained for Brownian bridges or excursions, and do not correspond to any standard distributions which I am aware of. We also do not find constructions of meander sample paths as simple transformations of other standard processes, in the way that Brownian bridges are obtained from Brownian motion and excursions from Bessel processes.

The idea is that, once a Brownian motion becomes positive, it has a positive probability of remaining positive over any given finite time interval. So, it can be directly conditioned on this event. As a meander is just Brownian motion conditioned on being positive, this gives its conditional distributions starting from any positive time.

For real *x* and *t* > 0, I use the notation

*φ*_{t}(*x*) = (2π*t*)^{-1/2}e^{-x²/(2t)}

for the normal density of mean 0 and variance *t*. I will also write

(5) | Φ_{t}(*x*) = ∫_{0}^{x}*φ*_{t}(*y*) d*y* = (1/2)erf(*x*/√(2*t*))

where erf is the error function. For positive *x* this is the probability of a normally distributed random variable of mean 0 and variance *t* lying between 0 and *x*. I will make use of the identity

𝔼[sgn(*U*)] = ℙ(*U* > 0) - ℙ(*U* < 0) = 2Φ_{ν}(*μ*)

for a normal random variable *U* of mean *μ* and variance *ν*. To start, let us consider the distribution of a Brownian motion restricted to being positive over a time interval.
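The functions *φ*_{t} and Φ_{t} are straightforward to code up. The sketch below (mine, not from the post) checks the erf expression for Φ_{t} against a direct numerical integration of *φ*_{t}, and checks the identity 𝔼[sgn(*U*)] = 2Φ_{ν}(*μ*) for a normal *U* by Monte Carlo; the particular parameter values and tolerances are assumptions of the sketch.

```python
import numpy as np
from math import erf, sqrt, pi

def phi(t, x):
    # Normal density of mean 0 and variance t.
    return np.exp(-np.asarray(x) ** 2 / (2 * t)) / sqrt(2 * pi * t)

def Phi(t, x):
    # Equation (5): P(0 <= N <= x) for N ~ N(0, t).
    return 0.5 * erf(x / sqrt(2 * t))

# Phi should match a direct numerical integral of phi from 0 to x.
t, x = 0.7, 1.3
ys = np.linspace(0.0, x, 100_001)
f = phi(t, ys)
integral = float(np.sum((f[1:] + f[:-1]) * np.diff(ys)) / 2)

# Monte Carlo check of E[sgn(U)] = 2 Phi_nu(mu) for U ~ N(mu, nu).
rng = np.random.default_rng(0)
mu, nu = 0.4, 2.5
U = rng.normal(mu, sqrt(nu), 1_000_000)
mc = float(np.mean(np.sign(U)))
```

Both comparisons should agree to well within the quadrature and Monte Carlo error.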

Lemma 8 Let *X* be a Brownian motion and 0 < *t* < *T* be fixed times. Then, conditioned on being positive over the interval [0, *T*] and on the value of *X*_{0} > 0, the probability density of *X*_{t} is

(6) | *p*_{t, T, X_0}(*x*) = (*φ*_{t}(*x* - *X*_{0}) - *φ*_{t}(*x* + *X*_{0}))Φ_{T - t}(*x*)/Φ_{T}(*X*_{0})

over *x* > 0.

*Proof:* Throughout, I will implicitly condition on the value of *X*_{0} > 0, so this can be taken as a fixed positive constant. Letting *τ* be the first time at which *X* hits zero, then it is positive over the interval [0, *T*] if and only if *τ* > *T*. For any bounded measurable function *f* vanishing on (-∞, 0], the reflection principle gives

𝔼[1_{{τ > T}}*f*(*X*_{T})] = 𝔼[*f*(*X*_{T}) - *f*(-*X*_{T})].
To see this, consider the case where *τ* ≤ *T*. Conditioned on this event, the strong Markov property says that *X*_{τ + s} is a standard Brownian motion with time index *s*, so has symmetric distribution under changing its sign. So, it is symmetric under replacing *X*_{T} by –*X*_{T}. As this also inverts the sign of the expectation on the right hand side above, it must be zero. Hence, the expectation on the right hand side is unchanged if restricted to the event *τ* > *T*, giving the left hand side.

As *X*_{T} is normal with mean *X*_{0} and variance *T*, using *f* = 1_{(0, ∞)} together with the identity above gives

ℙ(*τ* > *T*) = 𝔼[sgn(*X*_{T})] = 2Φ_{T}(*X*_{0}).
On the other hand, conditioned on *X*_{t}, *X*_{T} is normal with mean *X*_{t} and variance *T* – *t*, giving

𝔼[1_{{τ > T}} | ℱ_{t}] = 1_{{τ > t}}𝔼[sgn(*X*_{T}) | *X*_{t}] = 1_{{τ > t}}2Φ_{T - t}(*X*_{t}).
As ±*X*_{t} has probability density *φ*_{t}(*x*∓*X*_{0}), this shows that, restricted to the event *τ* > *T*, *X*_{t} has density

2(*φ*_{t}(*x* - *X*_{0}) - *φ*_{t}(*x* + *X*_{0}))Φ_{T - t}(*x*)

over *x* > 0. To condition on *τ* > *T*, we divide through by its probability 2Φ_{T}(*X*_{0}), giving expression (6) for *p*_{t, T, X_0}(*x*). ⬜

Equation (6) for the transition probabilities can be rearranged as

(7) | *p*_{t, T, X_0}(*x*) = 2(2π*t*)^{-1/2}e^{-(x² + X_0²)/(2t)} sinh(*xX*_{0}/*t*) Φ_{T - t}(*x*)/Φ_{T}(*X*_{0})

which can be further simplified to

(8) | *p*_{t, T, X_0}(*x*) = *c* *φ*_{t}(*x* - *X*_{0})(1 - e^{-2xX_0/t})Φ_{T - t}(*x*)

for a ‘normalizing constant’ *c* chosen to make this integrate to 1 or, explicitly,

*c* = Φ_{T}(*X*_{0})^{-1}.
Equation (8) can be interpreted, up to constant factors absorbed into *c*, as a product of the probability density of *X*_{t}, the conditional probability of *X* being positive before *t*, and the conditional probability of it being positive between *t* and *T*.
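The conditioned density can be sanity checked by Monte Carlo: simulate Brownian paths from *X*_{0}, reject those that go nonpositive at any grid time, and compare the surviving empirical law at time *t* against (6), namely (*φ*_{t}(*x* - *X*_{0}) - *φ*_{t}(*x* + *X*_{0}))Φ_{T - t}(*x*)/Φ_{T}(*X*_{0}). The discrete-grid rejection and the tolerances below are assumptions of this sketch (mine, not from the post).

```python
import numpy as np
from math import erf, sqrt, pi

t, T, x0 = 0.5, 1.0, 1.0

def phi(s, x):
    return np.exp(-np.asarray(x) ** 2 / (2 * s)) / np.sqrt(2 * pi * s)

def Phi(s, x):
    return 0.5 * np.vectorize(erf)(np.asarray(x) / sqrt(2 * s))

# Density (6) on a grid; its total mass should be 1.
xs = np.linspace(1e-8, 12.0, 100_001)
p = (phi(t, xs - x0) - phi(t, xs + x0)) * Phi(T - t, xs) / Phi(T, x0)
total = float(np.sum((p[1:] + p[:-1]) * np.diff(xs)) / 2)

# Monte Carlo: Brownian paths from x0, kept if positive at every grid time.
rng = np.random.default_rng(5)
n, paths = 500, 20_000
dt = T / n
X = x0 + np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)
keep = np.min(X, axis=1) > 0
Xt = X[keep, n // 2 - 1]          # surviving values at time t = 0.5

# Compare P(X_t <= 1.5) empirically and from the density.
emp = float(np.mean(Xt <= 1.5))
mask = xs <= 1.5
thy = float(np.sum((p[mask][1:] + p[mask][:-1]) * np.diff(xs[mask])) / 2)
```

The surviving fraction approximates ℙ(*τ* > *T*) = 2Φ_{T}(*X*_{0}) ≈ 0.68 from above, since sub-grid dips below zero go undetected.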

Lemma 8 directly gives the Brownian meander transition probabilities.

Theorem 9 A continuous process {*X*_{t}}_{0 ≤ t ≤ 1} is a Brownian meander if and only if it starts from zero, is strictly positive at positive times, and is Markov such that for times 0 < *s* < *t* < 1, the distribution of *X*_{t} conditioned on *X*_{s} has density *p*_{t - s, 1 - s, X_s} defined by (6).

*Proof:* Starting with standard Brownian motion *X* with natural filtration ℱ_{·}, conditioned on being positive on the time interval [*ϵ*, 1], theorem 4 says that this converges weakly to the law of a Brownian meander in the limit as *ϵ* goes to 0.

However, so long as 0 < *ϵ* < *s* < *t* < 1, then conditioned on ℱ_{s}, *X*_{s + u} is a Brownian motion with time index *u* conditioned to be positive on the interval [0, 1 - *s*]. Lemma 8 then says that *X*_{t} has probability density *p*_{t - s, 1 - s, X_s} conditioned on ℱ_{s}. As this holds for all *ϵ* < *s* and is continuous in *X*_{s}, it remains true in the limit.

For the converse, it needs to be shown that any positive Markov process with the given transition probabilities and starting from zero is a meander. However, the law of a Markov process is uniquely determined by its initial distribution and transition probabilities. We almost have all of this, but are just missing the transition probabilities for 0 = *s* < *t*. These can be computed by taking the limit as *s* and *X*_{s} tend to zero, which is the same as computing the distribution of *X*_{t}. I do this in lemma 10 below. ⬜

Since a meander starts from zero, we can take the limit of the transition probabilities as *s* goes to zero and *X*_{s} tends to zero. This gives the probability density of *X*_{t}.

Lemma 10 If *X* is a Brownian meander then *X*_{t} has probability density

2*t*^{-3/2}*x*e^{-x²/(2t)}Φ_{1 - t}(*x*)

over *x* > 0, for all 0 < *t* < 1.

*Proof:* For times 0 < *s* < *t*, theorem 9 says that the probability density of *X*_{t} conditional on the value of *X*_{s} is *p*_{t - s, 1 - s, X_s}(*x*). We just need to take the limit of this as *s* goes to zero and, by continuity, *X*_{s} tends to zero. This is just taking the limit *X*_{0} → 0 in (7) with *T* = 1. Noting that sinh(*xX*_{0}/*t*) and Φ_{1}(*X*_{0}) both go to zero in this limit, use the first order approximations

sinh(*xX*_{0}/*t*) ≈ *xX*_{0}/*t*,  Φ_{1}(*X*_{0}) ≈ *φ*_{1}(0)*X*_{0} = *X*_{0}/√(2π),

so their ratio tends to √(2π)*x*/*t*. Using this in (7) gives the claimed probability density. ⬜

The law of *X*_{t}, as described by lemma 10, is not a standard distribution as far as I am aware. At the final time *t* = 1, however, we obtain the Rayleigh distribution. This is the *χ*_{2} distribution or, equivalently, the square root of the *χ*²_{2} distribution, which is also the square root of an exponentially distributed random variable of rate parameter 1/2. It has probability density *x*e^{–x²/2} over *x* > 0.

Lemma 11 If *X* is a Brownian meander then *X*_{1} has the Rayleigh distribution.

*Proof:* The probability density of *X*_{1} can be computed by taking the *t* → 1 limit in the density described in lemma 10. As the limit of Φ_{1 - t}(*x*) is 1/2, we obtain the Rayleigh distribution as claimed. ⬜
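Lemma 10 gives the density 2*t*^{-3/2}*x*e^{-x²/(2t)}Φ_{1 - t}(*x*), and the sketch below (my own deterministic cross-check, with grid sizes and tolerances as assumptions) verifies numerically that it integrates to one at an intermediate time and approaches the Rayleigh density *x*e^{-x²/2} as *t* → 1.

```python
import numpy as np
from math import erf, sqrt

def Phi(s, x):
    # P(0 <= N <= x) for N ~ N(0, s), as in (5).
    return 0.5 * np.vectorize(erf)(np.asarray(x) / sqrt(2 * s))

def meander_density(t, x):
    # Candidate density of X_t for a Brownian meander (lemma 10).
    x = np.asarray(x)
    return 2 * t ** -1.5 * x * np.exp(-x ** 2 / (2 * t)) * Phi(1 - t, x)

xs = np.linspace(1e-9, 15.0, 200_001)

# Total mass at t = 1/2 should be 1.
f = meander_density(0.5, xs)
total = float(np.sum((f[1:] + f[:-1]) * np.diff(xs)) / 2)

# t -> 1 limit should be the Rayleigh density x exp(-x^2 / 2).
g = meander_density(1 - 1e-9, xs)
rayleigh = xs * np.exp(-xs ** 2 / 2)
err = float(np.max(np.abs(g - rayleigh)))
```

Both checks are deterministic, so agreement is limited only by quadrature and floating point error.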

#### The meander SDE

Finally, I look at representing Brownian meanders via a stochastic differential equation (SDE). We should expect this to have a positive drift term which blows up to infinity if the process approaches zero in order to push it away. This is what we saw for the Brownian excursion SDE. In fact, we show that the meander satisfies

(9) | d*X*_{t} = (*φ*_{1 - t}(*X*_{t})/Φ_{1 - t}(*X*_{t}))d*t* + d*W*_{t}

for a Brownian motion *W*. The function Φ_{1 - t}(*x*), which is defined by (5), vanishes as *x* goes to zero, so that the drift term tends to infinity there.
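An Euler–Maruyama discretization of (9), with drift *φ*_{1 - t}(*x*)/Φ_{1 - t}(*x*), gives a rough simulation of meander paths. The scheme below is my own sketch: it cannot start exactly at zero since the drift is singular there, so it starts from a small positive level shortly after time 0, caps the drift step, and reflects to stay positive; these are all numerical assumptions. The endpoint should then be approximately Rayleigh, with mean near √(π/2) ≈ 1.25.

```python
import numpy as np
from math import erf, sqrt, pi

def drift(t, x):
    # phi_{1-t}(x) / Phi_{1-t}(x), with phi and Phi as in (5).
    s = 1.0 - t
    num = np.exp(-x ** 2 / (2 * s)) / np.sqrt(2 * pi * s)
    den = 0.5 * np.vectorize(erf)(x / sqrt(2 * s))
    return num / den

rng = np.random.default_rng(2)
paths, n = 1_000, 2_000
t0, x0 = 1e-3, 0.05          # start just after 0, at the typical meander level
dt = (1.0 - t0) / n

X = np.full(paths, x0)
t = t0
cap = 1.0 / np.sqrt(dt)      # cap the singular drift (numerical safeguard)
for _ in range(n):
    d = np.minimum(drift(t, X), cap)
    # Reflect at zero to keep the scheme positive.
    X = np.abs(X + d * dt + rng.normal(0.0, np.sqrt(dt), paths))
    t += dt
```

The drift cap only binds when a path gets within about one noise standard deviation of zero, where the exact dynamics would push it back up on the same scale anyway.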

There are several ways in which we can try to prove that SDE (9) is satisfied. First, using the transition probabilities, we could directly show that the difference between *X* and the drift term is a martingale. This is the approach I used for the Brownian bridge SDE, and would also work here. Second, we could try to express *X* as a transform of a process whose SDE is already known, as was done for Brownian excursions, although here there is no clear transformation available. The method I will use is Girsanov transforms. By writing *X* as a Brownian motion under a change of measure, its drift can be computed.

Theorem 12 A continuous process {*X*_{t}}_{t ∈ [0, 1]} is a Brownian meander if and only if *X*_{0} = 0 (almost surely), it is strictly positive, and it solves the SDE (9) over 0 < *t* ≤ 1 for a standard Brownian motion {*W*_{t}}_{t ∈ [0, 1]}.

*Proof:* Fixing *ϵ* > 0, it is sufficient to show that the SDE (9) holds over *ϵ* ≤ *t* ≤ 1. As previously noted in the proof of theorem 9, on this time range, *X* has the distribution of a Brownian motion conditioned on being positive. That is, we can suppose that *X*_{t} is a Brownian motion over *t* ≥ *ϵ* and, letting *τ* be the first time after *ϵ* at which it hits zero, we condition on the event {*τ* > 1}. This conditioning is an absolutely continuous change of measure to the new probability distribution ℚ with Radon–Nikodym derivative

dℚ/dℙ ∼ 1_{{τ > 1}}.
I use ∼ to denote that the two sides are equal up to a constant scaling factor (i.e., a normalizing constant to make the total probability equal to 1). As described in the proof of lemma 8, this measure change has conditional expectations

𝔼[dℚ/dℙ | ℱ_{t}] ∼ 1_{{τ > t}}Φ_{1 - t}(*X*_{t})
for times *ϵ* ≤ *t* ≤ 1. Over *t* < *τ*, this is just *M*_{t} = Φ_{1 - t}(*X*_{t}) up to a constant factor and, as Φ_{1 - t}(*x*) has derivative *φ*_{1 - t}(*x*) in *x*, we compute the quadratic covariation,

d⟨*M*, *X*⟩_{t}/*M*_{t} = (*φ*_{1 - t}(*X*_{t})/Φ_{1 - t}(*X*_{t}))d*t*.
Theorem 11 of the Girsanov transforms post tells us that, under measure ℚ and for times *τ* ∧ 1 > *t* ≥ *ϵ*, *X* is a sum of a Brownian motion *W* and the drift term above. Also, we have *τ* > 1 almost surely under the measure ℚ, so the result holds over *ϵ* ≤ *t* < 1.

Letting *ϵ* go to zero, this almost completes the proof. There is one slight issue remaining to address; theorem 11 quoted above is stated for *equivalent* changes of measure whereas, here, the measure change is only absolutely continuous. This is because *τ* ≤ 1 with positive ℙ-probability, but has zero ℚ-probability. To fix this, choose any *δ* > 0 and let *σ* be the first time after *ϵ* at which *X* ≤ *δ*. On the filtration ℱ_{σ}, the Radon–Nikodym derivative is Φ_{1 - σ}(*X*_{σ}) when *σ* < 1 and is 1 when *σ* ≥ 1, up to scaling. As this is strictly positive, the measure change is equivalent on ℱ_{σ} and, hence, the argument above applies to *X* up until time *σ*. Letting *δ* go to zero, we have *σ* ≥ 1 eventually, giving the result. ⬜