# SDEs Under Changes of Time and Measure

The previous two posts described the behaviour of standard Brownian motion under stochastic changes of time and equivalent changes of measure. I now demonstrate some applications of these ideas to the study of stochastic differential equations (SDEs). Surprisingly strong results can be obtained and, in many cases, it is possible to prove existence and uniqueness of solutions to SDEs without imposing any continuity constraints on the coefficients. This is in contrast to most standard existence and uniqueness results for both ordinary and stochastic differential equations, where conditions such as Lipschitz continuity are required. For example, consider the following SDE for measurable coefficients ${a,b\colon{\mathbb R}\rightarrow{\mathbb R}}$ and a Brownian motion B

 $\displaystyle dX_t=a(X_t)\,dB_t+b(X_t)\,dt.$ (1)

If a is nonzero, ${a^{-2}}$ is locally integrable, and b/a is bounded, then we can show that this has weak solutions satisfying uniqueness in law for any specified initial distribution of X. The idea is to start with X being a standard Brownian motion and apply a change of time to obtain a solution to (1) in the case where the drift term b is zero. Then, a Girsanov transformation can be used to change to a measure under which X satisfies the SDE for nonzero drift b. As these steps are invertible, every solution can be obtained from a Brownian motion in this way, which uniquely determines the distribution of X.

A standard example demonstrating the concept of weak solutions and uniqueness in law is provided by Tanaka’s SDE

 $\displaystyle dX_t={\rm sgn}(X_t)\,dB_t$ (2)
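To see this weak-solution behaviour numerically, here is a minimal Euler-Maruyama sketch of Tanaka's SDE (the step sizes, path counts, and the convention ${{\rm sgn}(0)=1}$ are illustrative assumptions, not part of the original construction). Since ${|{\rm sgn}(x)|=1}$, the solution has quadratic variation t, so by Lévy's characterization ${X_1}$ should be approximately N(0,1), i.e. distributed as a standard Brownian motion.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanaka_paths(n_steps=2_000, t=1.0, n_paths=2_000):
    """Euler-Maruyama scheme for dX = sgn(X) dB, with sgn(0) taken as +1."""
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X = np.zeros(n_paths)
    for k in range(n_steps):
        sgn = np.where(X >= 0, 1.0, -1.0)
        X = X + sgn * dB[:, k]
    return X

# Since |sgn(x)| = 1, X has quadratic variation t, so X_1 should look
# like a sample from N(0, 1): a weak solution with the law of Brownian motion.
X1 = tanaka_paths()
print(X1.mean(), X1.var())  # both close to 0 and 1 up to Monte Carlo error
```

Note that the scheme only produces a weak solution: the law of X is pinned down, but X is not a measurable function of the driving Brownian motion B.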

# Girsanov Transformations

Girsanov transformations describe how Brownian motion and, more generally, local martingales behave under changes of the underlying probability measure. Let us start with a much simpler identity applying to normal random variables. Suppose that X and ${Y=(Y^1,\ldots,Y^n)}$ are jointly normal random variables defined on a probability space ${(\Omega,\mathcal{F},{\mathbb P})}$. Then ${U\equiv\exp(X-\frac{1}{2}{\rm Var}(X)-{\mathbb E}[X])}$ is a positive random variable with expectation 1, and a new measure ${{\mathbb Q}=U\cdot{\mathbb P}}$ can be defined by ${{\mathbb Q}(A)={\mathbb E}[1_AU]}$ for all sets ${A\in\mathcal{F}}$. Writing ${{\mathbb E}_{\mathbb Q}}$ for expectation under the new measure, we have ${{\mathbb E}_{\mathbb Q}[Z]={\mathbb E}[UZ]}$ for all bounded random variables Z. The expectation of a bounded measurable function ${f\colon{\mathbb R}^n\rightarrow{\mathbb R}}$ of Y under the new measure is

 $\displaystyle {\mathbb E}_{\mathbb Q}\left[f(Y)\right]={\mathbb E}\left[f\left(Y+{\rm Cov}(X,Y)\right)\right],$ (1)

where ${{\rm Cov}(X,Y)}$ is the covariance. This is a vector whose i-th component is the covariance ${{\rm Cov}(X,Y^i)}$. So, Y has the same distribution under ${{\mathbb Q}}$ as ${Y+{\rm Cov}(X,Y)}$ has under ${{\mathbb P}}$. That is, when changing to the new measure, Y remains jointly normal with the same covariance matrix, but its mean increases by ${{\rm Cov}(X,Y)}$. Equation (1) follows from a straightforward calculation of the characteristic function of Y with respect to both ${{\mathbb P}}$ and ${{\mathbb Q}}$.
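Identity (1) is easy to check by Monte Carlo. In the sketch below (the particular means, covariance matrix, sample size, and test function are arbitrary choices for illustration), the reweighted expectation ${{\mathbb E}[Uf(Y)]}$ is compared with ${{\mathbb E}[f(Y+{\rm Cov}(X,Y))]}$ for a scalar Y.

```python
import numpy as np

rng = np.random.default_rng(1)

# Jointly normal (X, Y): X ~ N(0.3, 1.0), Y ~ N(-0.5, 2.0), Cov(X, Y) = 0.8.
mean = np.array([0.3, -0.5])
cov = np.array([[1.0, 0.8], [0.8, 2.0]])
X, Y = rng.multivariate_normal(mean, cov, size=500_000).T

# Density U = dQ/dP, normalised so that E[U] = 1.
U = np.exp(X - 0.5 * cov[0, 0] - mean[0])

f = np.cos  # any bounded measurable test function will do

lhs = np.mean(U * f(Y))            # E_Q[f(Y)] as a weighted P-expectation
rhs = np.mean(f(Y + cov[0, 1]))    # E[f(Y + Cov(X, Y))]
print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```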

Now consider a standard Brownian motion B and fix a time ${T>0}$ and a constant ${\mu}$. Then, for all times ${t\ge 0}$, the covariance of ${B_t}$ and ${B_T}$ is ${{\rm Cov}(B_t,B_T)=t\wedge T}$. Applying (1) to the measure ${{\mathbb Q}=\exp(\mu B_T-\mu^2T/2)\cdot{\mathbb P}}$ shows that

$\displaystyle B_t=\tilde B_t + \mu (t\wedge T)$

where ${\tilde B}$ is a standard Brownian motion under ${{\mathbb Q}}$. Under the new measure, B has gained a constant drift of ${\mu}$ over the interval ${[0,T]}$. Such transformations are widely applied in finance. For example, in the Black-Scholes model of option pricing it is common to work under a risk-neutral measure, which transforms the drift of a financial asset to be the risk-free rate of return. Girsanov transformations extend this idea to much more general changes of measure, and to arbitrary local martingales. However, as shown below, the strongest results are obtained for Brownian motion which, under a change of measure, just gains a stochastic drift term.
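This drift gain can also be observed by simulation. In the following sketch (the grid size, horizon, and the value ${\mu=0.5}$ are arbitrary assumptions), expectations under ${{\mathbb Q}}$ are computed as ${{\mathbb P}}$-expectations weighted by ${U=\exp(\mu B_T-\mu^2T/2)}$, and the mean of ${B_t}$ under ${{\mathbb Q}}$ should come out near ${\mu(t\wedge T)}$.

```python
import numpy as np

rng = np.random.default_rng(2)

mu, T, horizon = 0.5, 1.0, 2.0
n_paths, n_steps = 100_000, 100
dt = horizon / n_steps
# Brownian paths sampled on a grid over [0, horizon].
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

def B_at(t):
    """Path values at grid time t (t a multiple of dt)."""
    return B[:, int(round(t / dt)) - 1]

# Density dQ/dP given by the Girsanov transformation at time T.
U = np.exp(mu * B_at(T) - 0.5 * mu**2 * T)

# Under Q, E[B_t] = mu * min(t, T): the drift accrues only on [0, T].
print(np.mean(U * B_at(0.5)))  # approximately mu * 0.5 = 0.25
print(np.mean(U * B_at(2.0)))  # approximately mu * T = 0.5
```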

# Time-Changed Brownian Motion

From the definition of standard Brownian motion B, given any positive constant c, ${B_{ct}-B_{cs}}$ will be normal with mean zero and variance ${c(t-s)}$ for times ${t>s\ge 0}$. So, scaling the time axis of Brownian motion B to get the new process ${B_{ct}}$ just results in another Brownian motion scaled by the factor ${\sqrt{c}}$.

This idea is easily generalized. Consider a measurable function ${\xi\colon{\mathbb R}_+\rightarrow{\mathbb R}_+}$ and Brownian motion B on the filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$. So, ${\xi}$ is a deterministic process, not depending on the underlying probability space ${\Omega}$. If ${\theta(t)\equiv\int_0^t\xi^2_s\,ds}$ is finite for each ${t>0}$ then the stochastic integral ${X=\int\xi\,dB}$ exists. Furthermore, X will be a Gaussian process with independent increments. For piecewise constant integrands, this results from the fact that linear combinations of joint normal variables are themselves normal. The case for arbitrary deterministic integrands follows by taking limits. Also, the Itô isometry says that ${X_t-X_s}$ has variance

$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\left(\int_s^t\xi\,dB\right)^2\right]&\displaystyle={\mathbb E}\left[\int_s^t\xi^2_u\,du\right]\smallskip\\ &\displaystyle=\theta(t)-\theta(s)\smallskip\\ &\displaystyle={\mathbb E}\left[(B_{\theta(t)}-B_{\theta(s)})^2\right]. \end{array}$

So, ${\int\xi\,dB=\int\sqrt{\theta^\prime(t)}\,dB_t}$ has the same distribution as the time-changed Brownian motion ${B_{\theta(t)}}$.
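As a quick numerical illustration (the integrand ${\xi_s=s}$, horizon, and sample sizes are arbitrary choices), the discretized integral ${\int_0^1\xi\,dB}$ should be distributed as ${N(0,\theta(1))}$ with ${\theta(1)=\int_0^1 s^2\,ds=1/3}$, matching the time-changed Brownian motion ${B_{\theta(1)}}$.

```python
import numpy as np

rng = np.random.default_rng(3)

t, n_steps, n_paths = 1.0, 500, 20_000
dt = t / n_steps
times = np.arange(n_steps) * dt           # left endpoints of the partition
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))

xi = times                                # deterministic integrand xi_s = s
X = (xi * dB).sum(axis=1)                 # int_0^1 xi dB as a discrete sum

theta = t**3 / 3                          # theta(1) = int_0^1 s^2 ds = 1/3
print(X.mean(), X.var(), theta)           # mean near 0, variance near theta
```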

With the help of Lévy’s characterization, these ideas can be extended to more general, non-deterministic, integrands and to stochastic time-changes. In fact, doing this leads to the startling result that all continuous local martingales are just time-changed Brownian motion.

# Lévy’s Characterization of Brownian Motion

Standard Brownian motion, ${\{B_t\}_{t\ge 0}}$, is defined to be a real-valued process satisfying the following properties.

1. ${B_0=0}$.
2. ${B_t-B_s}$ is normally distributed with mean 0 and variance ${t-s}$ independently of ${\{B_u\colon u\le s\}}$, for any ${t>s\ge 0}$.
3. B has continuous sample paths.

As always, all that really matters is that these properties hold almost surely. Now, to apply the techniques of stochastic calculus, it is assumed that there is an underlying filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$, which necessitates a further definition: a process B is a Brownian motion on a filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$ if, in addition to the above properties, it is also adapted, so that ${B_t}$ is ${\mathcal{F}_t}$-measurable, and ${B_t-B_s}$ is independent of ${\mathcal{F}_s}$ for each ${t>s\ge 0}$. Note that the above condition that ${B_t-B_s}$ is independent of ${\{B_u\colon u\le s\}}$ is not explicitly required, as it follows from the independence from ${\mathcal{F}_s}$. According to these definitions, a process is a Brownian motion if and only if it is a Brownian motion with respect to its natural filtration.

The property that ${B_t-B_s}$ has zero mean independently of ${\mathcal{F}_s}$ means that Brownian motion is a martingale. Furthermore, we previously calculated its quadratic variation as ${[B]_t=t}$. An incredibly useful result is that the converse statement holds. That is, Brownian motion is the only local martingale with this quadratic variation. This is known as Lévy’s characterization, and shows that Brownian motion is a particularly general stochastic process, justifying its ubiquitous influence on the study of continuous-time stochastic processes.

Theorem 1 (Lévy’s Characterization of Brownian Motion) Let X be a local martingale with ${X_0=0}$. Then, the following are equivalent.

1. X is standard Brownian motion on the underlying filtered probability space.
2. X is continuous and ${X^2_t-t}$ is a local martingale.
3. X has quadratic variation ${[X]_t=t}$.
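Condition 3 is easy to observe numerically: for a simulated Brownian path, the sum of squared increments over a fine partition of ${[0,t]}$ should be close to t. A minimal sketch (the grid size and horizon are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

t, n = 2.0, 1_000_000
dB = rng.normal(0.0, np.sqrt(t / n), n)   # increments of one Brownian path

# The sum of squared increments over a partition of mesh t/n
# approximates the quadratic variation [B]_t.
qv = np.sum(dB**2)
print(qv)  # close to t = 2.0
```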

# Quadratic Variations and Integration by Parts

A major difference between standard integral calculus and stochastic calculus is the existence of quadratic variations and covariations. Such terms show up, for example, in the stochastic version of the integration by parts formula.

For motivation, let us start by considering a standard argument for differentiable processes. The increment of a process ${X}$ over a time step ${\delta t>0}$ can be written as ${\delta X_t\equiv X_{t+\delta t}-X_t}$. The following identity is easily verified,

 $\displaystyle \delta (XY) = X\delta Y + Y\delta X + \delta X\,\delta Y.$ (1)

Now, divide the time interval ${[0,t]}$ into ${n}$ equal parts. That is, set ${t_k=kt/n}$ for ${k=0,1,\ldots,n}$. Then, using ${\delta t=t/n}$ and summing equation (1) over these times,

 $\displaystyle X_tY_t -X_0Y_0=\sum_{k=0}^{n-1} X_{t_k}\delta Y_{t_k} +\sum_{k=0}^{n-1}Y_{t_k}\delta X_{t_k}+\sum_{k=0}^{n-1}\delta X_{t_k}\delta Y_{t_k}.$ (2)

If the processes are continuously differentiable, then the final term on the right hand side is a sum of ${n}$ terms, each of order ${1/n^2}$, and therefore is of order ${1/n}$. This vanishes in the limit ${n\rightarrow\infty}$, leading to the integration by parts formula

 $\displaystyle X_tY_t-X_0Y_0 = \int_0^t X\,dY + \int_0^t Y\,dX.$

Now, suppose that ${X,Y}$ are standard Brownian motions. Then, ${\delta X,\delta Y}$ are normal random variables with standard deviation ${\sqrt{\delta t}}$. It follows that the final term on the right hand side of (2) is a sum of ${n}$ terms each of which is, on average, of order ${1/n}$. So, even in the limit as ${n}$ goes to infinity, it does not vanish. Consequently, in stochastic calculus, the integration by parts formula requires an additional term, which is called the quadratic covariation (or, just covariation) of ${X}$ and ${Y}$.
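Note that (2) is an exact algebraic identity for any partition; it is only the behaviour of its final term that distinguishes the smooth and Brownian cases. The sketch below (partition size and horizon are arbitrary choices) checks the identity on simulated paths, then computes the covariation sum for two independent Brownian motions (close to zero) and for a path against itself (close to ${t}$).

```python
import numpy as np

rng = np.random.default_rng(5)

# Two independent simulated Brownian paths on a partition of [0, 1].
n = 1_000
dt = 1.0 / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])
Y = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])

dX, dY = np.diff(X), np.diff(Y)
lhs = X[-1] * Y[-1] - X[0] * Y[0]
rhs = np.sum(X[:-1] * dY) + np.sum(Y[:-1] * dX) + np.sum(dX * dY)
print(abs(lhs - rhs))  # zero up to floating-point rounding: (2) is exact

# For independent Brownian motions the covariation sum is small, while
# pairing a path with itself gives a sum close to t = 1.
print(np.sum(dX * dY), np.sum(dX * dX))
```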

# Integrating with respect to Brownian motion

In this post I attempt to give a rigorous definition of integration with respect to Brownian motion (as introduced by Itô in 1944), while keeping it as concise as possible. The stochastic integral can also be defined for a much more general class of processes called semimartingales. However, as Brownian motion is such an important special case which can be handled directly, I start with it here. If ${\{X_s\}_{s\ge 0}}$ is a standard Brownian motion defined on a probability space ${(\Omega,\mathcal{F},{\mathbb P})}$ and ${\alpha_s}$ is a stochastic process, the aim is to define the integral

 $\displaystyle \int_0^t\alpha_s\,dX_s.$ (1)

In ordinary calculus, this can be approximated by Riemann sums, which converge for continuous integrands whenever the integrator ${X}$ is of finite variation. This leads to the Riemann–Stieltjes integral and, generalizing to measurable integrands, the Lebesgue–Stieltjes integral. Unfortunately, this method does not work for Brownian motion which, as discussed in my previous post, has infinite variation over all nontrivial compact intervals.

The standard approach is to start by writing out the integral explicitly for piecewise constant integrands. If there are times ${0=t_0\le t_1\le\cdots\le t_n=t}$ such that ${\alpha_s=\alpha_{t_k}}$ for each ${s\in(t_{k-1},t_k)}$ then the integral is given by the summation,

 $\displaystyle \int_0^t\alpha\,dX = \sum_{k=1}^n\alpha_{t_k}(X_{t_k}-X_{t_{k-1}}).$ (2)

We could try to extend to more general integrands by approximating by piecewise constant processes but, as mentioned above, Brownian motion has infinite variation paths and this will diverge in general.

Fortunately, when working with random processes, there are a couple of observations which improve the chances of being able to consistently define the integral. They are

• The integral is not a single real number, but is instead a random variable defined on the probability space. It therefore only has to be defined up to a set of zero probability and not on every possible path of ${X}$.
• Rather than requiring limits of integrals to converge for each path of ${X}$ (e.g., dominated convergence), the much weaker convergence in probability can be used.

These observations are still not enough, and the main insight is to only look at integrands which are adapted. That is, the value of ${\alpha_t}$ can only depend on ${X}$ through its values at prior times. This condition is met in most situations where we need to use stochastic calculus, such as with (forward) stochastic differential equations. To make this rigorous, for each time ${t\ge 0}$ let ${\mathcal{F}_t}$ be the sigma-algebra generated by ${X_s}$ for all ${s\le t}$. This is a filtration (${\mathcal{F}_s\subseteq\mathcal{F}_t}$ for ${s\le t}$), and ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}$ is referred to as a filtered probability space. Then, ${\alpha}$ is adapted if ${\alpha_t}$ is ${\mathcal{F}_t}$-measurable for all times ${t}$. Piecewise constant and left-continuous processes, such as ${\alpha}$ in (2), which are also adapted are commonly referred to as simple processes.

However, as with standard Lebesgue integration, we must further impose a measurability property. A stochastic process ${\alpha}$ can be viewed as a map from the product space ${{\mathbb R}_+\times\Omega}$ to the real numbers, given by ${(t,\omega)\mapsto\alpha_t(\omega)}$. It is said to be jointly measurable if it is measurable with respect to the product sigma-algebra ${\mathcal{B}({\mathbb R}_+)\otimes\mathcal{F}}$, where ${\mathcal{B}}$ refers to the Borel sigma-algebra. Finally, it is called progressively measurable, or just progressive, if its restriction to ${[0,t]\times\Omega}$ is ${\mathcal{B}([0,t])\otimes\mathcal{F}_t}$-measurable for each positive time ${t}$. It is easily shown that progressively measurable processes are adapted, and the simple processes introduced above are progressive.

With these definitions, the stochastic integral of a progressively measurable process ${\alpha}$ with respect to Brownian motion ${X}$ is defined whenever ${\int_0^t\alpha^2ds<\infty}$ almost surely (that is, with probability one). The integral (1) is a random variable, defined uniquely up to sets of zero probability by the following two properties.

• The integral agrees with the explicit formula (2) for simple integrands.
• If ${\alpha^n}$ and ${\alpha}$ are progressive processes such that ${\int_0^t(\alpha^n-\alpha)^2\,ds}$ tends to zero in probability as ${n\rightarrow\infty}$, then
 $\displaystyle \int_0^t\alpha^n\,dX\rightarrow\int_0^t\alpha\,dX,$ (3)

where, again, convergence is in probability.
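As a concrete illustration of these defining properties, the sketch below implements formula (2) for a simple adapted integrand (the choice of ${\alpha}$ as the sign of the path at each interval's left endpoint, along with the grid and sample sizes, is an arbitrary assumption for illustration). Since ${\alpha^2=1}$, the integral at time 1 should have mean zero and variance ${\int_0^1\alpha^2\,ds=1}$, consistent with the Itô isometry.

```python
import numpy as np

rng = np.random.default_rng(6)

t, n, n_paths = 1.0, 200, 20_000
dt = t / n
dXs = rng.normal(0.0, np.sqrt(dt), (n_paths, n))   # Brownian increments
X_left = np.cumsum(dXs, axis=1) - dXs              # path at left endpoints

# Simple adapted integrand: constant on each interval, determined by the
# value of X at the interval's left endpoint, so it depends only on prior
# values of the path.
alpha = np.where(X_left >= 0, 1.0, -1.0)

# Formula (2): the integral is the sum of alpha times the increments of X.
I = np.sum(alpha * dXs, axis=1)

# Since alpha^2 = 1, E[I] = 0 and E[I^2] = int_0^1 alpha^2 ds = 1.
print(I.mean(), (I**2).mean())  # approximately 0 and 1
```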

# The Pathological Properties of Brownian Motion

I turn away with fear and horror from the lamentable plague of continuous functions which do not have derivatives – Charles Hermite (1893)

Despite being of central importance to the theory of stochastic processes and to many applications in areas such as physics and economics, Brownian motion has some nasty properties such as being nowhere differentiable, which are in stark contrast to the usual well-behaved functions studied in elementary differential calculus. As I intend to post entries on stochastic calculus, it seems that a good place to start is by describing some of the properties of Brownian motion which rule out the use of the standard techniques of differential calculus. Strictly speaking, these properties should not really be regarded as pathological although they can seem so to someone not familiar with such processes and would have been regarded as such at the time of Hermite’s statement above.

Historically, the term ‘Brownian motion’ refers to the experiments performed by Robert Brown in 1827 where pollen and dust particles floating on the surface of water are observed to move about with a jittery motion. This was explained mathematically by Albert Einstein in 1905 and Marian Smoluchowski in 1906, and is caused by the particles being continuously bombarded by water molecules. Louis Bachelier also studied the mathematical properties of Brownian motion in 1900, applying it to the evolution of stock prices.

Mathematically, Brownian motion is a stochastic process whose increments are independent and identically distributed random variables, and which has continuous sample paths. In the case of the random motion of particles due to collisions with water molecules, as in the experiments performed by Robert Brown, each bombardment by a molecule will not produce a sudden change in the position of the particle. Instead, it will produce a sudden change in the particle’s velocity. So mathematical Brownian motion as described here is better used as a model of the velocity of the particle rather than its position (even better, the velocity can be modeled by an Ornstein-Uhlenbeck process). More generally, it is used as a source of random noise in many models of physical and economic systems. It is also referred to as a Wiener process after Norbert Wiener, and is often represented using a capital W.