Predictable Stopping Times

Although this post is under the heading of `the general theory of semimartingales’ it is not, strictly speaking, about semimartingales at all. Instead, I will be concerned with a characterization of predictable stopping times. The reason for including this now is twofold. First, the results are too advanced to have been proven in the earlier post on predictable stopping times, and reasonably efficient self-contained proofs can only be given now that we have already built up a certain amount of stochastic calculus theory. Secondly, the results stated here are indispensable to the further study of semimartingales. In particular, standard semimartingale decompositions require some knowledge of predictable processes and predictable stopping times.

Recall that a stopping time {\tau} is said to be predictable if there exists a sequence of stopping times {\tau_n\le\tau} increasing to {\tau} and such that {\tau_n < \tau} whenever {\tau > 0}. Also, the predictable sigma-algebra {\mathcal{P}} is defined as the sigma-algebra generated by the left-continuous and adapted processes. Stated like this, these two concepts can appear quite different. However, as was previously shown, stochastic intervals of the form {[\tau,\infty)} for predictable times {\tau} are all in {\mathcal{P}} and, in fact, generate the predictable sigma-algebra.

The main result (Theorem 1) of this post is to show that a converse statement holds, so that {[\tau,\infty)} is in {\mathcal{P}} if and only if the stopping time {\tau} is predictable. This rather simple sounding result does have many far-reaching consequences. We can use it to show that all cadlag predictable processes are locally bounded, that local martingales are predictable if and only if they are continuous, and to give a characterization of cadlag predictable processes in terms of their jumps. Some very strong statements about stopping times also follow without much difficulty for certain special stochastic processes. For example, if the underlying filtration is generated by a Brownian motion then every stopping time is predictable. Actually, this is true whenever the filtration is generated by a continuous Feller process. It is also possible to give a surprisingly simple characterization of stopping times for filtrations generated by arbitrary non-continuous Feller processes. Precisely, a stopping time {\tau} is predictable if the underlying Feller process is almost surely continuous at time {\tau}, and totally inaccessible if the process is almost surely discontinuous at {\tau}.

As usual, we work with respect to a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\in{\mathbb R}_+},{\mathbb P})}. I now give a statement and proof of the main result of this post. Note that the equivalence of the four conditions below means that any of them could be used as an alternative definition of predictable stopping times. Often, the first condition below is used instead. Stopping times satisfying the definition used in these notes are sometimes called announceable, with the sequence {\tau_n\uparrow\tau} said to announce {\tau} (this terminology is used by, e.g., Rogers & Williams). Stopping times satisfying property 3 below, which is easily seen to be equivalent to 2 (see the sketch after the theorem), are sometimes called fair. Then, the following theorem says that the sets of predictable, fair and announceable stopping times all coincide.

Theorem 1 Let {\tau} be a stopping time. Then, the following are equivalent.

  1. {[\tau]\in\mathcal{P}}.
  2. {\Delta M_\tau1_{[\tau,\infty)}} is a local martingale for all local martingales M.
  3. {{\mathbb E}[1_{\{\tau < \infty\}}\Delta M_\tau]=0} for all cadlag bounded martingales M.
  4. {\tau} is predictable.
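To sketch the easy implication from the second condition to the third (this is only a gloss on the equivalence, not the proof given in the post): if {M} is a cadlag bounded martingale then {N=\Delta M_\tau1_{[\tau,\infty)}} is a local martingale which is uniformly bounded by twice the bound on {M}, and bounded local martingales are proper martingales. Using the convention {\Delta M_0=0},

\displaystyle  {\mathbb E}\left[1_{\{\tau\le t\}}\Delta M_\tau\right]={\mathbb E}[N_t]={\mathbb E}[N_0]=0,

and letting {t} increase to infinity, bounded convergence gives {{\mathbb E}[1_{\{\tau<\infty\}}\Delta M_\tau]=0}.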

Continue reading “Predictable Stopping Times”

Local Martingales

Recall from the previous post that a cadlag adapted process {X} is a local martingale if there is a sequence {\tau_n} of stopping times increasing to infinity such that the stopped processes {1_{\{\tau_n>0\}}X^{\tau_n}} are martingales. Local submartingales and local supermartingales are defined similarly.

An example of a local martingale which is not a martingale is given by the `double-loss’ gambling strategy. Interestingly, in 18th century France, such strategies were known as martingales, and this is the origin of the mathematical term. Suppose that a gambler is betting sums of money, with even odds, on a simple win/lose game. For example, betting that a coin toss comes up heads. He could bet one dollar on the first toss and, if he loses, double his stake to two dollars for the second toss. If he loses again, then he is down three dollars and doubles the stake again to four dollars. If he keeps on doubling the stake after each loss in this way, then he is always gambling one more dollar than the total losses so far. He only needs to continue in this way until the coin eventually does come up heads, and he walks away with net winnings of one dollar. This therefore describes a fair game where, eventually, the gambler is guaranteed to win.

Of course, this is not an effective strategy in practice. The losses grow exponentially and, if he doesn’t win quickly, the gambler will hit his credit limit, in which case he loses everything. All that the strategy achieves is to trade a large probability of winning a dollar against a small chance of losing everything. It does, however, give a simple example of a local martingale which is not a martingale.

The gambler’s winnings can be defined by a stochastic process {\{Z_n\}_{n=1,\ldots}} representing his net gain (or loss) just before the n’th toss. Let {\epsilon_1,\epsilon_2,\ldots} be a sequence of independent random variables with {{\mathbb P}(\epsilon_n=1)={\mathbb P}(\epsilon_n=-1)=1/2}. Here, {\epsilon_n} represents the outcome of the n’th toss, with 1 referring to a head and -1 referring to a tail. Set {Z_1=0} and, for {n\ge2},

\displaystyle  Z_{n}=\begin{cases} 1,&\text{if }Z_{n-1}=1,\\ Z_{n-1}+\epsilon_{n-1}(1-Z_{n-1}),&\text{otherwise}. \end{cases}
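The recursion is straightforward to simulate. The following Python sketch (the function name, seed and sample sizes are illustrative choices, not part of these notes) estimates {{\mathbb E}[Z_{n+1}]} and the probability that the gambler has won within the first {n} tosses:

    import random

    def sample_Z(n_tosses, rng=random):
        """Net gain from the doubling strategy after n_tosses fair coin
        tosses: stake one dollar more than the losses so far, and stop
        betting once the net gain reaches one dollar."""
        Z = 0
        for _ in range(n_tosses):
            if Z == 1:
                break                      # the gambler has already won
            eps = rng.choice((1, -1))      # fair coin: +1 head, -1 tail
            Z = Z + eps * (1 - Z)          # the stake is 1 - Z
        return Z

    random.seed(1)
    N, n = 100_000, 10
    finals = [sample_Z(n) for _ in range(N)]
    print(sum(finals) / N)                  # approximately 0, up to Monte Carlo noise
    print(sum(f == 1 for f in finals) / N)  # approximately 1 - 2**-10

The first estimate is consistent with the martingale property {{\mathbb E}[Z_{n+1}]=Z_1=0}, while the second illustrates that the gambler eventually wins with probability one.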

The process {Z} is a martingale with respect to its natural filtration, starting at zero and, eventually, ending up equal to one. It can be converted into a local martingale by speeding up the time scale to fit infinitely many tosses into a unit time interval:

\displaystyle  X_t=\begin{cases} Z_n,&\text{if }1-1/n\le t<1-1/(n+1),\\ 1,&\text{if }t\ge 1. \end{cases}

This is a martingale with respect to its natural filtration on the time interval {[0,1)}. Setting {\tau_n=\inf\{t\colon\vert X_t\vert\ge n\}}, the optional stopping theorem shows that {X^{\tau_n}} is a uniformly bounded martingale on {t<1}, continuous at {t=1}, and constant on {t\ge 1}. It is therefore a martingale, showing that {X} is a local martingale. However, {{\mathbb E}[X_1]=1\not={\mathbb E}[X_0]=0}, so it is not a martingale. Continue reading “Local Martingales”

Localization

Special classes of processes, such as martingales, are very important to the study of stochastic calculus. In many cases, however, processes under consideration `almost’ satisfy the martingale property, but are not actually martingales. This occurs, for example, when taking limits or stochastic integrals with respect to martingales. It is necessary to generalize the martingale concept to that of local martingales. More generally, localization is a method of extending a given property to a larger class of processes. In this post I mention a few definitions and simple results concerning localization, and look more closely at local martingales in the next post.

Definition 1 Let P be a class of stochastic processes. Then, a process X is locally in P if there exists a sequence of stopping times {\tau_n\uparrow\infty} such that the stopped processes

\displaystyle  1_{\{\tau_n>0\}}X^{\tau_n}

are in P. The sequence {\tau_n} is called a localizing sequence for X (w.r.t. P).

I write {P_{\rm loc}} for the processes locally in P. Choosing the sequence {\tau_n\equiv\infty} of stopping times shows that {P\subseteq P_{\rm loc}}. A class of processes is said to be stable if {1_{\{\tau>0\}}X^\tau} is in P whenever X is, for all stopping times {\tau}. For example, the optional stopping theorem shows that the classes of cadlag martingales, cadlag submartingales and cadlag supermartingales are all stable.
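As a simple example of these definitions in action (a standard fact, sketched here in my own words), let P be the class of uniformly bounded cadlag adapted processes, which is easily seen to be stable. Then, every continuous adapted process {X} is locally in P. Indeed, the first exit times

\displaystyle  \tau_n=\inf\left\{t\ge0\colon\vert X_t\vert\ge n\right\}

are stopping times increasing to infinity, as continuous paths are bounded on bounded time intervals, and each {1_{\{\tau_n>0\}}X^{\tau_n}} is bounded by {n}. That is, every continuous adapted process is locally bounded.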

Definition 2 A process is

  1. a local martingale if it is locally in the class of cadlag martingales.
  2. a local submartingale if it is locally in the class of cadlag submartingales.
  3. a local supermartingale if it is locally in the class of cadlag supermartingales.

Continue reading “Localization”

Class (D) Processes

A stochastic process X is said to be uniformly integrable if the set of random variables {\{X_t\colon t\in{\mathbb R}_+\}} is uniformly integrable. However, even if this is the case, it does not follow that the set of values of the process sampled at arbitrary stopping times is uniformly integrable.

For the case of a cadlag martingale X, optional sampling can be used. If {t\ge 0} is any fixed time then this says that {X_\tau={\mathbb E}[X_t\mid\mathcal{F}_\tau]} for stopping times {\tau\le t}. As sets of conditional expectations of a random variable are uniformly integrable, the following result holds.

Lemma 1 Let X be a cadlag martingale. Then, for each {t\ge 0}, the set

\displaystyle  \{X_\tau\colon\tau\le t\text{ is a stopping time}\}

is uniformly integrable.

This suggests the following generalized concepts of uniform integrability for stochastic processes.

Definition 2 Let X be a jointly measurable stochastic process. Then, it is

  • of class (D) if {\{X_\tau\colon\tau<\infty\text{ is a stopping time}\}} is uniformly integrable.
  • of class (DL) if, for each {t\ge 0}, {\{X_\tau\colon\tau\le t\text{ is a stopping time}\}} is uniformly integrable.
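For example, the `double-loss’ process {X} constructed in the post on local martingales is a local martingale which is not of class (D). Sketching the argument in the notation of that post: the stopping times {\sigma_k=\inf\{t\colon X_t\le1-2^k\}\wedge1} are finite, with {X_{\sigma_k}=1-2^k} on the event that the first k tosses are all tails, which has probability {2^{-k}}, and {X_{\sigma_k}=1} otherwise. So, for {k\ge2},

\displaystyle  {\mathbb E}\left[\vert X_{\sigma_k}\vert1_{\{\vert X_{\sigma_k}\vert\ge2^k-1\}}\right]=(2^k-1)2^{-k}\rightarrow1

as {k\rightarrow\infty}, and the uniform integrability required by class (D) fails.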

Continue reading “Class (D) Processes”

Optional Sampling

Doob’s optional sampling theorem states that the properties of martingales, submartingales and supermartingales generalize to stopping times. For simple stopping times, which take only finitely many values in {{\mathbb R}_+}, the argument is a relatively basic application of elementary integrals. For simple stopping times {\sigma\le\tau}, the stochastic interval {(\sigma,\tau]} and its indicator function {1_{(\sigma,\tau]}} are elementary predictable. For any submartingale {X}, the properties of elementary integrals give the inequality

\displaystyle  {\mathbb E}\left[X_\tau-X_\sigma\right]={\mathbb E}\left[\int_0^\infty 1_{(\sigma,\tau]}\,dX\right]\ge 0. (1)

For any set {A\in\mathcal{F}_\sigma}, the random time

\displaystyle  \sigma^\prime(\omega)=\begin{cases} \sigma(\omega),&\textrm{if }\omega\in A,\\ \tau(\omega),&\textrm{otherwise}, \end{cases}

is easily seen to be a stopping time. Since {X_{\sigma^\prime}=1_AX_\sigma+1_{A^c}X_\tau}, replacing {\sigma} by {\sigma^\prime} extends inequality (1) to the following,

\displaystyle  {\mathbb E}\left[1_A(X_\tau-X_\sigma)\right]={\mathbb E}\left[X_\tau-X_{\sigma^\prime}\right]\ge 0. (2)

As this inequality holds for all sets {A\in\mathcal{F}_\sigma}, it implies the extension of the submartingale property {X_\sigma\le{\mathbb E}[X_\tau\vert\mathcal{F}_\sigma]} to these random times. This argument applies to all simple stopping times, and is sufficient to prove the optional sampling result for discrete-time submartingales. In continuous time, the additional hypothesis that the process is right-continuous is required. Then, the result follows by taking limits of simple stopping times.

Theorem 1 Let {\sigma\le\tau} be bounded stopping times. For any cadlag martingale, submartingale or supermartingale {X}, the random variables {X_\sigma, X_\tau} are integrable and the following are satisfied.

  1. If {X} is a martingale then, {X_\sigma={\mathbb E}\left[X_{\tau}\vert\mathcal{F}_\sigma\right].}
  2. If {X} is a submartingale then, {X_\sigma\le{\mathbb E}\left[X_{\tau}\vert\mathcal{F}_\sigma\right].}
  3. If {X} is a supermartingale then, {X_\sigma\ge{\mathbb E}\left[X_{\tau}\vert\mathcal{F}_\sigma\right].}
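Taking expectations in the first statement gives {{\mathbb E}[X_\sigma]={\mathbb E}[X_\tau]={\mathbb E}[X_0]} for any cadlag martingale {X} and bounded stopping times {\sigma\le\tau}. As a quick numerical illustration (a Monte Carlo sketch; the choice of walk, stopping times and parameters is mine), take {X} to be a simple symmetric random walk, {\tau=10}, and {\sigma} the first hitting time of {-2} capped at {10}:

    import random

    random.seed(3)
    N = 200_000
    total_sigma = total_tau = 0
    for _ in range(N):
        x, x_sigma = 0, None
        for t in range(1, 11):
            x += random.choice((-1, 1))    # one step of the random walk
            if x_sigma is None and x == -2:
                x_sigma = x                # sigma occurs: record X_sigma
        if x_sigma is None:
            x_sigma = x                    # sigma is capped at tau = 10
        total_sigma += x_sigma
        total_tau += x
    print(total_sigma / N, total_tau / N)  # both approximately E[X_0] = 0

Both sample means come out close to zero, as the theorem requires.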

Continue reading “Optional Sampling”

Predictable Stopping Times

The concept of a stopping time was introduced a couple of posts back. Roughly speaking, these are times for which it is possible to observe when they occur. Often, however, it is useful to distinguish between different types of stopping times. A random time for which it is possible to predict when it is about to occur is called a predictable stopping time. As always, we work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}.

Definition 1 A map {\tau\colon\Omega\rightarrow\bar{\mathbb R}_+} is a predictable stopping time if there exists a sequence of stopping times {\tau_n\uparrow\tau} satisfying {\tau_n<\tau} whenever {\tau\not=0}.

Predictable stopping times are alternatively referred to as previsible. The sequence of times {\tau_n} in this definition is said to announce {\tau}. Note that, in this definition, the random time was not explicitly required to be a stopping time. However, this is automatically the case, as the following identity shows.

\displaystyle  \left\{\tau\le t\right\}=\bigcap_n\left\{\tau_n\le t\right\}\in\mathcal{F}_t.

One way in which predictable stopping times occur is as hitting times of a continuous adapted process. It is easy to predict when such a process is about to hit any level, because it must continuously approach that value.

Theorem 2 Let {X} be a continuous adapted process and {K} be a real number. Then

\displaystyle  \tau=\inf\left\{t\in{\mathbb R}_+\colon X_t\ge K\right\}

is a predictable stopping time.

Proof: Let {\tau_n} be the first time at which {X_t\ge K-1/n} which, by the debut theorem, is a stopping time. This gives an increasing sequence bounded above by {\tau}. Also, {X_{\tau_n}\ge K-1/n} whenever {\tau_n<\infty} and, by left-continuity, setting {\sigma=\lim_n\tau_n} gives {X_\sigma\ge K} whenever {\sigma<\infty}. So, {\sigma\ge\tau}, showing that the sequence {\tau_n} increases to {\tau}. If {0<\tau_n\le\tau<\infty} then, by continuity, {X_{\tau_n}=K-1/n\not=K=X_{\tau}}. So, {\tau_n<\tau} whenever {0<\tau<\infty} and the sequence {n\wedge\tau_n} announces {\tau}. ⬜
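The announcing sequence appearing in this proof is easy to visualize numerically. In the following Python sketch (purely an illustration; a Gaussian random walk with small steps stands in for a continuous path, and all names and parameters are mine), the approximate hitting times {\tau_n} of the levels {K-1/n} increase towards the hitting time {\tau} of {K}:

    import random

    def hitting_index(path, level):
        """Index of the first entry of the discretized path at or above
        the given level, or None if the level is never reached."""
        for i, x in enumerate(path):
            if x >= level:
                return i
        return None

    # Gaussian random walk with small steps, approximating a continuous path.
    random.seed(2)
    dt, steps = 1e-4, 100_000
    path, x = [0.0], 0.0
    for _ in range(steps):
        x += random.gauss(0.0, dt ** 0.5)
        path.append(x)

    K = 0.8 * max(path)                    # a level this path certainly reaches
    tau = hitting_index(path, K)
    for n in (1, 2, 5, 10, 100):
        tau_n = hitting_index(path, K - 1 / n)
        # tau_n increases towards tau; on a discrete grid the strict
        # inequality tau_n < tau can fail once 1/n is below the step size
        print(n, tau_n, tau)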

In fact, predictable stopping times are always hitting times of continuous processes, as stated by the following result. Furthermore, by the second condition below, it is enough to prove the much weaker condition that a random time can be announced `in probability’ to conclude that it is a predictable stopping time.

Lemma 3 Suppose that the filtration is complete and {\tau\colon\Omega\rightarrow\bar{\mathbb R}_+} is a random time. The following are equivalent.

  1. {\tau} is a predictable stopping time.
  2. For any {\epsilon,\delta,K>0} there is a stopping time {\sigma} satisfying
    \displaystyle  {\mathbb P}\left(K\wedge\tau-\epsilon<\sigma<\tau{\rm\ or\ }\sigma=\tau=0\right)>1-\delta. (1)
  3. {\tau=\inf\{t\ge 0\colon X_t=0\}} for some continuous adapted process {X}.

Continue reading “Predictable Stopping Times”

Sigma Algebras at a Stopping Time

The previous post introduced the notion of a stopping time {\tau}. A stochastic process {X} can be sampled at such random times and, if the process is jointly measurable, {X_\tau} will be a measurable random variable. It is usual to study adapted processes, where {X_t} is measurable with respect to the sigma-algebra {\mathcal{F}_t} at that time. Then, it is natural to extend the notion of adapted processes to random times and ask the following. What is the sigma-algebra of observable events at the random time {\tau}, and is {X_\tau} measurable with respect to this? The idea is that if a set {A} is observable at time {\tau} then for any time {t}, its restriction to the set {\{\tau\le t\}} should be in {\mathcal{F}_t}. As always, we work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. The sigma-algebra at the stopping time {\tau} is then,

\displaystyle  \mathcal{F}_\tau=\left\{A\in\mathcal{F}_\infty\colon A\cap\{\tau\le t\}\in\mathcal{F}_t{\rm\ for\ all\ }t\ge 0\right\}.

The restriction to sets in {\mathcal{F}_\infty} is to take account of the possibility that the stopping time can be infinite, and it ensures that {A=A\cap\{\tau\le\infty\}\in\mathcal{F}_\infty}. From this definition, a random variable {U} is {\mathcal{F}_\tau}-measurable if and only if {1_{\{\tau\le t\}}U} is {\mathcal{F}_t}-measurable for all times {t\in{\mathbb R}_+\cup\{\infty\}}. For example, if {\tau\equiv s} is a deterministic time then {\mathcal{F}_\tau=\mathcal{F}_s}, as we would hope.

Similarly, we can ask which events are observable strictly before the stopping time. For any time {t}, this sigma-algebra should include {\mathcal{F}_t} restricted to the event {\{t<\tau\}}. This suggests the following definition,

\displaystyle  \mathcal{F}_{\tau-}=\sigma\left(\left\{ A\cap\{t<\tau\}\colon t\ge 0,A\in\mathcal{F}_t \right\}\cup\mathcal{F}_0\right).

The notation {\sigma(\cdot)} denotes the sigma-algebra generated by a collection of sets and, in this definition, the elements of {\mathcal{F}_0} are included so that the definition is consistent with the convention {\mathcal{F}_{0-}=\mathcal{F}_0} used in these notes.

With these definitions, the question of whether or not a process {X} is {\mathcal{F}_\tau}-measurable at a stopping time {\tau} can be answered. There is one minor issue here though; stopping times can be infinite, whereas stochastic processes in these notes are defined on the time index set {{\mathbb R}_+}. We could just restrict to the set {\{\tau<\infty\}}, but it is handy to allow processes to be defined at infinite times. So, for the moment, we consider a process {X_t} where the time index {t} runs over {\bar{\mathbb R}_+\equiv{\mathbb R}_+\cup\{\infty\}}, and say that {X} is a predictable, optional or progressive process if it satisfies the respective property restricted to times in {{\mathbb R}_+} and {X_\infty} is {\mathcal{F}_\infty}-measurable.

Lemma 1 Let {X} be a stochastic process and {\tau} be a stopping time.

  • If {X} is progressively measurable then {X_\tau} is {\mathcal{F}_\tau}-measurable.
  • If {X} is predictable then {X_\tau} is {\mathcal{F}_{\tau-}}-measurable.

Continue reading “Sigma Algebras at a Stopping Time”

Stopping Times and the Debut Theorem

In the previous two posts of the stochastic calculus notes, I introduced the basic concepts of a stochastic process and a filtration. As we often observe stochastic processes at a random time, a further definition is required. A stopping time is a random time which is compatible with the underlying filtration, in the sense of the following definition. As discussed in the previous post, we are working with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}.

Definition 1 A stopping time is a map {\tau\colon\Omega\rightarrow{\mathbb R}_+\cup\{\infty\}} such that {\{\tau\le t\}\in\mathcal{F}_t} for each {t\ge 0}.

This definition is equivalent to stating that the process {1_{[0,\tau)}} is adapted, since {1_{[0,\tau)}(t)=1-1_{\{\tau\le t\}}}. Equivalently, at any time {t}, the event {\{\tau\le t\}} that the stopping time has already occurred is observable.
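For example, any deterministic time {\tau\equiv s} is a stopping time and, if {\sigma,\tau} are stopping times, then so are their minimum and maximum, since

\displaystyle  \{\sigma\wedge\tau\le t\}=\{\sigma\le t\}\cup\{\tau\le t\},\qquad \{\sigma\vee\tau\le t\}=\{\sigma\le t\}\cap\{\tau\le t\}

are both in {\mathcal{F}_t}.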

One common way in which stopping times appear is as the first time at which an adapted stochastic process hits some value. The debut theorem states that this does indeed give a stopping time.

Theorem 2 (Debut theorem) Let {X} be an adapted right-continuous stochastic process defined on a complete filtered probability space. If {K} is any real number then {\tau\colon\Omega\rightarrow{\mathbb R}_+\cup\{\infty\}} defined by

\displaystyle  \tau(\omega)=\inf\left\{t\in{\mathbb R}_+\colon X_t(\omega)\ge K\right\} (1)

is a stopping time.
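To see where the completeness assumption comes in (a remark, not part of the theorem), note that hitting times of open sets are comparatively easy. If {\sigma=\inf\{t\in{\mathbb R}_+\colon X_t>K\}} then, by right-continuity of {X},

\displaystyle  \{\sigma<t\}=\bigcup_{s\in{\mathbb Q}\cap[0,t)}\{X_s>K\}\in\mathcal{F}_t,

so {\sigma} is at least a stopping time with respect to the right-continuous filtration {\mathcal{F}_{t+}}. For the closed level set {\{X\ge K\}} in the theorem, the infimum need not be witnessed by the values of the process at countably many times, and completeness of the probability space is used to get around this.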

Continue reading “Stopping Times and the Debut Theorem”