Having covered the basics of continuous-time processes and filtrations in the previous posts, I now move on to stochastic integration. In standard calculus and ordinary differential equations, a central object of study is the derivative $df(t)/dt$ of a function $f(t)$. This does, however, require restricting attention to differentiable functions. By integrating, it is possible to generalize to bounded variation functions. If $f$ is such a function and $g$ is continuous, then the Riemann-Stieltjes integral $\int g\,df$ is well defined. The Lebesgue-Stieltjes integral further generalizes this to measurable integrands.
However, the kinds of processes studied in stochastic calculus are much less well behaved. For example, with probability one, the sample paths of standard Brownian motion are nowhere differentiable. Furthermore, they have infinite variation over bounded time intervals. Consequently, if $B$ is such a process, then the integral $\int g\,dB$ is not defined using standard methods.
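These path properties are easy to observe numerically. The following sketch, my own illustration rather than anything from the text, simulates a Brownian path with NumPy and compares its total variation, which diverges as the partition is refined, with its quadratic variation, which settles near $t$.

```python
import numpy as np

# Simulate a standard Brownian path on [0, 1] via its independent
# Gaussian increments over a fine grid.
rng = np.random.default_rng(0)
n = 2**16
dt = 1.0 / n
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

def variation(path, p):
    """Sum of |increments|^p of a discretely sampled path."""
    return float(np.sum(np.abs(np.diff(path)) ** p))

# Coarse -> fine partitions: the total variation keeps growing
# (on the order of sqrt(2n/pi) for n grid increments).
tv = [variation(B[::step], 1) for step in (256, 16, 1)]
# The quadratic variation over the finest partition is close to t = 1.
qv = variation(B, 2)
```

The growing `tv` values are the discrete shadow of infinite variation: no matter how the partition is refined, the sums of absolute increments do not converge.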
Stochastic integration with respect to standard Brownian motion was developed by Kiyoshi Itô. This required restricting the class of possible integrands to the adapted processes, and the integral can then be constructed using the Itô isometry. This method was later extended to more general square integrable martingales and, then, to the class of semimartingales. It can then be shown that, as with Lebesgue integration, versions of the bounded and dominated convergence theorems are satisfied.
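The Itô isometry states that $\mathbb{E}[(\int_0^t H\,dB)^2]=\mathbb{E}[\int_0^t H_s^2\,ds]$ for suitable adapted integrands $H$. As a quick sanity check, and purely as my own illustration, the following Monte Carlo sketch verifies this for the integrand $H_s=B_s$ on $[0,1]$, where both sides equal $1/2$.

```python
import numpy as np

# Monte Carlo check of the Ito isometry E[(int H dB)^2] = E[int H^2 ds]
# with the adapted integrand H_s = B_s on [0, 1]; both sides equal 1/2.
rng = np.random.default_rng(1)
paths, n = 10000, 200
dt = 1.0 / n
dB = rng.normal(0.0, np.sqrt(dt), (paths, n))
B = np.cumsum(dB, axis=1)
# The Ito integral uses left-endpoint values B_{t_k} of the integrand.
B_left = np.hstack([np.zeros((paths, 1)), B[:, :-1]])

ito_sums = np.sum(B_left * dB, axis=1)                # ~ int B dB per path
lhs = float(np.mean(ito_sums**2))                     # E[(int B dB)^2]
rhs = float(np.mean(np.sum(B_left**2, axis=1) * dt))  # E[int B^2 ds]
```

Evaluating the integrand at left endpoints is essential: it is the discrete analogue of the adaptedness (predictability) restriction on integrands.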
In these notes, a more direct approach is taken. The idea is that we simply define the stochastic integral such that the required elementary properties are satisfied. That is, it should agree with the explicit expressions for certain simple integrands, and should satisfy the bounded and dominated convergence theorems. Much of the theory of stochastic calculus follows directly from these properties, and detailed constructions of the integral are not required for many practical applications. Before moving on to the definition, note that, whereas the value of a standard Lebesgue integral is just a real number, stochastic integrals take values in the space of random variables. It is therefore possible to weaken some of the properties required of such integrals. First, any identity is only required to be satisfied almost surely. That is, on a set of probability one. Second, the notion of convergence of a sequence of real numbers can be replaced by the much weaker idea of convergence in probability.
We work with respect to a complete filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},\mathbb{P})$. Then, the space of random variables is denoted by $L^0(\Omega,\mathcal{F},\mathbb{P})$, or simply $L^0$. This is the space of measurable functions $\Omega\rightarrow\mathbb{R}$ or, more precisely, the equivalence classes of such functions up to equality on a set of probability one.
Recall that an elementary predictable process is of the form

$$\xi_t = Z_0 1_{\{t=0\}} + \sum_{k=1}^n Z_k 1_{\{s_k < t \le t_k\}} \qquad (1)$$

for $n\ge 0$, times $s_k\le t_k$, an $\mathcal{F}_0$-measurable random variable $Z_0$ and $\mathcal{F}_{s_k}$-measurable random variables $Z_k$. The stochastic integral of this with respect to a process $X$ is

$$\int_0^t \xi\,dX = \sum_{k=1}^n Z_k\left(X_{t_k\wedge t} - X_{s_k\wedge t}\right). \qquad (2)$$
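The explicit expression for elementary integrands, a finite sum $\sum_k Z_k(X_{t_k\wedge t}-X_{s_k\wedge t})$, can be transcribed directly into code. The following minimal Python sketch is my own illustration; the function name and the triple representation of the integrand are arbitrary choices.

```python
# Integral of an elementary predictable process against a path X up to
# time t: for an integrand sum_k Z_k 1_{(s_k, u_k]}, this returns
# sum_k Z_k * (X(u_k ^ t) - X(s_k ^ t)), with ^ denoting minimum.
def elementary_integral(terms, X, t):
    """terms: list of (Z, s, u) triples; X: the path as a function of time."""
    return sum(Z * (X(min(u, t)) - X(min(s, t))) for Z, s, u in terms)

# Deterministic check with X_t = t^2 and integrand 2*1_{(0,0.5]} + 3*1_{(0.5,1]}:
# up to t = 0.75 the integral is 2(0.25 - 0) + 3(0.5625 - 0.25) = 1.4375.
X = lambda u: u * u
value = elementary_integral([(2.0, 0.0, 0.5), (3.0, 0.5, 1.0)], X, 0.75)
```

Note that the $Z_0 1_{\{t=0\}}$ term of an elementary process contributes nothing to the integral, consistent with the sum running over $k=1,\ldots,n$ only.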
Integration is a linear function of the integrand, so that

$$\int_0^t (\lambda\xi + \mu\zeta)\,dX = \lambda\int_0^t \xi\,dX + \mu\int_0^t \zeta\,dX \qquad (3)$$

for real numbers $\lambda,\mu$ and predictable processes $\xi,\zeta$.
Also, the stochastic integral should satisfy bounded convergence in probability. That is, if $\xi^n$ is a sequence of predictable processes converging pointwise to a limit $\xi$, and $|\xi^n|\le K$ is uniformly bounded for some constant $K>0$, then the integrals converge,

$$\int_0^t \xi^n\,dX \rightarrow \int_0^t \xi\,dX$$

in probability as $n\rightarrow\infty$.
These properties are enough to define stochastic integration for bounded and predictable integrands. The notation $b\mathcal{P}$ is used to denote the set of bounded predictable processes.
Definition 1 Let $X$ be a stochastic process. The stochastic integral up to time $t\ge 0$ with respect to $X$, if it exists, is a map

$$b\mathcal{P}\rightarrow L^0,\qquad \xi\mapsto\int_0^t\xi\,dX$$

satisfying the following.
- The explicit expression (2) holds for all bounded elementary integrands $\xi$.
- Bounded convergence in probability: if $\xi^n\in b\mathcal{P}$ converge pointwise to a limit $\xi$ and are uniformly bounded, then $\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX$ in probability.
Proving the existence of the stochastic integral for an arbitrary integrator is, in general, quite a difficult problem. However, uniqueness is a simple consequence of the monotone class theorem. Also, note that the requirement that the integral is a linear function of the integrand was not mentioned in the definition above. However, this property is again a simple consequence of the monotone class theorem.
Lemma 2 When it exists, the stochastic integral given by Definition 1 is uniquely defined and is linear in the integrand.

Proof: Suppose that there were two versions of the integral, both satisfying the required properties. Denoting them by $I_t(\xi)$ and $\tilde I_t(\xi)$ respectively, let $S$ be the set of all bounded predictable processes $\xi$ satisfying $I_t(\xi)=\tilde I_t(\xi)$ almost surely. From the definition above, this includes all bounded elementary integrands and is closed under bounded convergence. However, the elementary predictable processes generate the predictable sigma-algebra. So, by the monotone class theorem, $S$ contains all bounded predictable processes, and $I_t=\tilde I_t$.
Linearity follows in a similar way. For elementary integrands, it follows from the explicit expression (2). More generally, fix real numbers $\lambda,\mu$ and a bounded elementary process $\zeta$. Then, let $S$ consist of the set of bounded predictable processes $\xi$ such that (3) is satisfied. Again, this includes the elementary processes and is closed under bounded convergence. So, (3) is satisfied for all elementary $\zeta$ and bounded predictable $\xi$.
Finally, fix a bounded predictable process $\xi$ and let $S$ be the set of all bounded predictable processes $\zeta$ such that (3) is satisfied. As proven above, this contains all elementary processes. Also, it is closed under bounded convergence, so (3) is satisfied for all $\xi,\zeta\in b\mathcal{P}$. ⬜
Any process with respect to which the stochastic integral is well defined must necessarily satisfy certain basic properties.
Lemma 3 Let $X$ be an adapted stochastic process such that, for each $t\ge 0$, the stochastic integral $\int_0^t\xi\,dX$ given by Definition 1 exists. Then,
- $X$ is right-continuous in probability.
- The set

$$\left\{\int_0^t\xi\,dX\colon\ \xi\ {\rm is\ elementary},\ |\xi|\le 1\right\} \qquad (4)$$

is bounded in probability, for each $t\ge 0$.
- $X$ has a cadlag version.
Proof: If $t_n\downarrow t$ is a sequence of times decreasing to $t$, then $1_{(t,t_n]}\rightarrow 0$ pointwise, so bounded convergence gives

$$X_{t_n}-X_t=\int_0^{t_1}1_{(t,t_n]}\,dX\rightarrow 0$$

in probability as $n\rightarrow\infty$. So, $X$ is right-continuous in probability.
Next, we can show that the set of integrals $\int_0^t\xi\,dX$, over predictable integrands $\xi$ with $|\xi|\le 1$, is bounded in probability, for each fixed time $t\ge 0$. In particular, by restricting to elementary integrands, this will imply that (4) is bounded in probability.
Arguing by contradiction, suppose that this is not the case. By definition, this means that there is an $\epsilon>0$ and a sequence $\xi^n$ of predictable processes with $|\xi^n|\le 1$ such that

$$\mathbb{P}\left(\left|\int_0^t\xi^n\,dX\right|\ge n\right)\ge\epsilon$$

for all $n$. However, the processes $\xi^n/n$ are uniformly bounded and converge pointwise to zero, so this contradicts bounded convergence in probability,

$$\int_0^t(\xi^n/n)\,dX=\frac{1}{n}\int_0^t\xi^n\,dX\rightarrow 0$$

as $n\rightarrow\infty$, whereas $\mathbb{P}(|\int_0^t(\xi^n/n)\,dX|\ge 1)\ge\epsilon$ for all $n$. Hence, the set given by (4) must indeed be bounded in probability.
Finally, using a result from an earlier post, the existence of cadlag versions follows from the first two properties and the condition that the process is adapted. ⬜
The first two conditions above are not only necessary for the existence of the stochastic integral, they are also sufficient. That fact is not needed here though, and the existence of the integral given these conditions will be shown in a later post. Adapted processes with respect to which stochastic integration is well defined are known as semimartingales. By the result above, there is no loss of generality in only considering cadlag processes.
Definition 4 A semimartingale is a cadlag adapted process $X$ such that, for each $t\ge 0$, the stochastic integral $\int_0^t\xi\,dX$ given by Definition 1 exists.
Simple examples of semimartingales include the cadlag adapted processes of finite variation over all bounded time intervals. Then, the stochastic integral of Definition 1 coincides with the Lebesgue-Stieltjes integral. More interesting examples include Brownian motion and, as we shall see later, all local martingales.
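For a finite-variation integrator, the agreement with the Lebesgue-Stieltjes integral can be seen concretely. As a hedged sketch of my own, take $X_t=t^2$, which is increasing and hence of finite variation, and the bounded integrand $\xi_s=s$; then $\int_0^1\xi\,dX=\int_0^1 s\cdot 2s\,ds=2/3$, and a left-endpoint Riemann-Stieltjes sum converges to this value.

```python
import numpy as np

# Pathwise Lebesgue-Stieltjes integral of xi_s = s against X_t = t^2
# on [0, 1], approximated by a left-endpoint Riemann-Stieltjes sum
# sum_k xi(s_k) (X_{k+1} - X_k); the exact value is 2/3.
n = 100000
s = np.linspace(0.0, 1.0, n + 1)
X = s**2
rs_sum = float(np.sum(s[:-1] * np.diff(X)))
```

Since $X$ has finite variation, the choice of left endpoints is not essential here; for genuine semimartingale integrators like Brownian motion, it is.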
As mentioned above, the stochastic integral was originally constructed with respect to Brownian motion and, then, similar techniques were applied to arbitrary martingales. This led, historically, to the definition of a semimartingale as a process which can be decomposed into a sum of a finite variation process and a local martingale. That such processes are in fact equivalent to the definition above is a consequence of the Bichteler-Dellacherie theorem which will be covered later in these notes.