Girsanov Transformations

Girsanov transformations describe how Brownian motion and, more generally, local martingales behave under changes of the underlying probability measure. Let us start with a much simpler identity applying to normal random variables. Suppose that X and {Y=(Y^1,\ldots,Y^n)} are jointly normal random variables defined on a probability space {(\Omega,\mathcal{F},{\mathbb P})}. Then {U\equiv\exp(X-\frac{1}{2}{\rm Var}(X)-{\mathbb E}[X])} is a positive random variable with expectation 1, and a new measure {{\mathbb Q}=U\cdot{\mathbb P}} can be defined by {{\mathbb Q}(A)={\mathbb E}[1_AU]} for all sets {A\in\mathcal{F}}. Writing {{\mathbb E}_{\mathbb Q}} for expectation under the new measure, then {{\mathbb E}_{\mathbb Q}[Z]={\mathbb E}[UZ]} for all bounded random variables Z. The expectation of a bounded measurable function {f\colon{\mathbb R}^n\rightarrow{\mathbb R}} of Y under the new measure is

\displaystyle  {\mathbb E}_{\mathbb Q}\left[f(Y)\right]={\mathbb E}\left[f\left(Y+{\rm Cov}(X,Y)\right)\right], (1)

where {{\rm Cov}(X,Y)} is the covariance. This is a vector whose i-th component is the covariance {{\rm Cov}(X,Y^i)}. So, Y has the same distribution under {{\mathbb Q}} as {Y+{\rm Cov}(X,Y)} has under {{\mathbb P}}. That is, when changing to the new measure, Y remains jointly normal with the same covariance matrix, but its mean is shifted by {{\rm Cov}(X,Y)}. Equation (1) follows from a straightforward calculation of the characteristic function of Y with respect to both {{\mathbb P}} and {{\mathbb Q}}.
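
Equation (1) is easy to check numerically. The following is a minimal Monte Carlo sketch (assuming only numpy; the covariance structure and test function are arbitrary illustrative choices): samples are reweighted by U and the two sides of (1) are compared.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Jointly normal pair: X = Z1 and Y = 0.8*Z1 + 0.6*Z2, so that X and Y
# are both standard normal with Cov(X, Y) = 0.8.
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x, y = z1, 0.8 * z1 + 0.6 * z2

# U = exp(X - Var(X)/2 - E[X]) has expectation 1 and defines Q = U.P.
u = np.exp(x - 0.5)

f = np.cos                      # any bounded measurable test function
lhs = np.mean(u * f(y))         # E_Q[f(Y)]
rhs = np.mean(f(y + 0.8))       # E[f(Y + Cov(X, Y))]
print(lhs, rhs)                 # both approach cos(0.8)exp(-1/2), about 0.423
```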

Now consider a standard Brownian motion B and fix a time {T>0} and a constant {\mu}. Then, for all times {t\ge 0}, the covariance of {B_t} and {B_T} is {{\rm Cov}(B_t,B_T)=t\wedge T}. Applying (1) to the measure {{\mathbb Q}=\exp(\mu B_T-\mu^2T/2)\cdot{\mathbb P}} shows that

\displaystyle  B_t=\tilde B_t + \mu (t\wedge T)

where {\tilde B} is a standard Brownian motion under {{\mathbb Q}}. Under the new measure, B has gained a constant drift of {\mu} over the interval {[0,T]}. Such transformations are widely applied in finance. For example, in the Black-Scholes model of option pricing it is common to work under a risk-neutral measure, which transforms the drift of a financial asset to be the risk-free rate of return. Girsanov transformations extend this idea to much more general changes of measure, and to arbitrary local martingales. However, as shown below, the strongest results are obtained for Brownian motion which, under a change of measure, just gains a stochastic drift term.
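
Before moving on, here is a short simulation sketch of this particular measure change (numpy assumed; all constants are arbitrary). Reweighting the sample paths by {U=\exp(\mu B_T-\mu^2T/2)} makes the empirical mean of {B_t} track {\mu(t\wedge T)}: the drift accumulates up to time T and is flat afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, T = 0.5, 1.0
steps, horizon, n = 200, 2.0, 20000   # simulate beyond T to see the plateau
dt = horizon / steps
t = np.linspace(0.0, horizon, steps + 1)

# n Brownian sample paths on the time grid.
dB = rng.standard_normal((n, steps)) * np.sqrt(dt)
B = np.hstack([np.zeros((n, 1)), np.cumsum(dB, axis=1)])

# Radon-Nikodym weights U = exp(mu*B_T - mu^2*T/2); here t[100] == T.
U = np.exp(mu * B[:, 100] - 0.5 * mu**2 * T)

mean_Q = np.mean(U[:, None] * B, axis=0)   # E_Q[B_t] on the grid
drift = mu * np.minimum(t, T)              # predicted mean mu*(t ^ T)
print(np.max(np.abs(mean_Q - drift)))      # small, up to Monte Carlo error
```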

As always, we work under a complete filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}. Consider a new measure {{\mathbb Q}=U\cdot{\mathbb P}}, for some strictly positive random variable U with expectation 1. Then {{\mathbb P}} and {{\mathbb Q}} are equivalent. That is, {{\mathbb Q}(A)=0} if and only if {{\mathbb P}(A)=0} for sets {A\in\mathcal{F}}. By the Radon-Nikodym theorem, all probability measures on {(\Omega,\mathcal{F})} equivalent to {{\mathbb P}} can be defined in this way, and U is referred to as the Radon-Nikodym derivative of {{\mathbb Q}} with respect to {{\mathbb P}}, denoted by {d{\mathbb Q}/d{\mathbb P}}. Conditional expectations with respect to the new measure are related to the original one as follows.

Lemma 1 Let {{\mathbb Q}=U\cdot{\mathbb P}} be an equivalent measure to {{\mathbb P}}. Then, for any bounded random variable Z and sigma-algebra {\mathcal{G}\subseteq\mathcal{F}}, the conditional expectation is given by

\displaystyle  {\mathbb E}_{\mathbb Q}\left[Z\vert\mathcal{G}\right]=\frac{{\mathbb E}[UZ\vert\mathcal{G}]}{{\mathbb E}[U\vert\mathcal{G}]}. (2)

Proof: Denote the right-hand side of (2) by Y, which is {\mathcal{G}}-measurable and satisfies {{\mathbb E}[UY\vert\mathcal{G}]={\mathbb E}[UZ\vert\mathcal{G}]}. So, for any {A\in\mathcal{G}},

\displaystyle  {\mathbb E}_{{\mathbb Q}}[1_AY]\displaystyle={\mathbb E}[1_AUY]={\mathbb E}[1_AUZ]={\mathbb E}_{\mathbb Q}[1_AZ]

which, by definition, means that {Y={\mathbb E}_{\mathbb Q}[Z\vert\mathcal{G}]}. \Box
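
On a finite probability space, (2) can be verified by direct computation. A toy sketch (numpy assumed; the numbers are arbitrary), with {\mathcal{G}} generated by a two-block partition of a four-point space:

```python
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])     # P on four outcomes
u = np.array([0.5, 1.5, 0.8, 1.2])     # candidate density dQ/dP
u /= (p * u).sum()                     # normalize so that E[U] = 1
q = p * u                              # the measure Q = U.P
z = np.array([1.0, -2.0, 3.0, 0.5])    # a bounded random variable Z

# G is generated by the partition {0,1}, {2,3}; conditional expectations
# with respect to G are block averages.
for blk in ([0, 1], [2, 3]):
    lhs = (q[blk] * z[blk]).sum() / q[blk].sum()                      # E_Q[Z|G]
    rhs = (p[blk] * u[blk] * z[blk]).sum() / (p[blk] * u[blk]).sum()  # E[UZ|G]/E[U|G]
    print(lhs, rhs)                    # equal on each block, as in (2)
```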

Given a measure {{\mathbb Q}} equivalent to {{\mathbb P}}, define the martingale

\displaystyle  U_t={\mathbb E}\left[d{\mathbb Q}/d{\mathbb P}\;\middle\vert\mathcal{F}_t\right]. (3)

Note that there is symmetry here in exchanging the roles of {{\mathbb P}} and {{\mathbb Q}}. Using Lemma 1 with the simple identity {d{\mathbb P}/d{\mathbb Q}=(d{\mathbb Q}/d{\mathbb P})^{-1}},

\displaystyle  {\mathbb E}_{\mathbb Q}\left[d{\mathbb P}/d{\mathbb Q}\;\vert\mathcal{F}_t\right]={\mathbb E}\left[1\vert\mathcal{F}_t\right]/{\mathbb E}\left[d{\mathbb Q}/d{\mathbb P}\;\vert\mathcal{F}_t\right]=U_t^{-1}.

In particular, {U^{-1}} is a uniformly integrable martingale with respect to {{\mathbb Q}} so, if a cadlag version of U is used, then {U^{-1}} will be a cadlag martingale converging to the limit {{\mathbb E}_{\mathbb Q}[d{\mathbb P}/d{\mathbb Q}\vert\mathcal{F}_\infty]}. As a cadlag path with a finite limit at infinity is bounded, {\sup_tU_t^{-1}} is almost surely finite.

We can now answer the following question — when is a process X a martingale under the equivalent measure {{\mathbb Q}}?

Lemma 2 Let {{\mathbb Q}} be an equivalent measure to {{\mathbb P}}, and {U_t} be as in (3). Then, a process X is a {{\mathbb Q}}-martingale if and only if UX is a {{\mathbb P}}-martingale.

Proof: Set {M=UX}. Then, X is adapted if and only if M is adapted. Also, {{\mathbb E}_{\mathbb Q}\vert X_t\vert={\mathbb E}\vert M_t\vert}, so X is integrable under {{\mathbb Q}} if and only if M is integrable under {{\mathbb P}}. Using (2) for the conditional expectation,

\displaystyle  {\mathbb E}[M_t\vert\mathcal{F}_s]={\mathbb E}[U_tX_t\vert\mathcal{F}_s]=U_s{\mathbb E}_{\mathbb Q}[X_t\vert\mathcal{F}_s]

for any {s\le t}. So, {{\mathbb E}[M_t\vert\mathcal{F}_s]=M_s} if and only if {{\mathbb E}_{\mathbb Q}[X_t\vert\mathcal{F}_s]=X_s}. \Box
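
For a concrete instance of Lemma 2 (a sketch assuming numpy): fix a horizon {T=2} and let {{\mathbb Q}=\exp(B_T-T/2)\cdot{\mathbb P}} for a Brownian motion B, so that {U_t={\mathbb E}[d{\mathbb Q}/d{\mathbb P}\vert\mathcal{F}_t]=\exp(B_t-t/2)} for {t\le T}. Then {X_t=B_t-t} is a {{\mathbb Q}}-martingale on {[0,T]} because {U_tX_t} is a {{\mathbb P}}-martingale starting at 0, which the following checks at a few fixed times.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6
for t in [0.5, 1.0, 2.0]:          # times up to the horizon T = 2
    B = rng.standard_normal(n) * np.sqrt(t)
    # E[U_t X_t] with U_t = exp(B_t - t/2) and X_t = B_t - t.
    print(t, np.mean(np.exp(B - 0.5 * t) * (B - t)))   # close to 0 = U_0 X_0
```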

Local Martingales

Lemma 2 can be localized to obtain a condition for a cadlag adapted process to be a local martingale with respect to {{\mathbb Q}}. First, as the process U defined by (3) is a martingale, it has a cadlag modification whenever it is right-continuous in probability. In particular, if the filtration is right-continuous then U always has a cadlag modification.

Lemma 3 Let {{\mathbb Q}} be an equivalent measure to {{\mathbb P}}, and suppose that U given by (3) is cadlag. Then, a cadlag adapted process X is a {{\mathbb Q}}-local martingale if and only if UX is a {{\mathbb P}}-local martingale.

Proof: Replacing X by {X-X_0} if necessary, suppose that {X_0=0}. Given a stopping time {\tau}, we first show that the stopped process {(UX)^{\tau}} is a martingale if and only if {UX^{\tau}} is a martingale which, by Lemma 2, is equivalent to {X^\tau} being a {{\mathbb Q}}-martingale.

As U is a nonnegative martingale, optional sampling gives {{\mathbb E}[U_t\vert\mathcal{F}_\tau]=U_{t\wedge\tau}} and,

\displaystyle  {\mathbb E}\left[\vert U_t X^\tau_t\vert\right]={\mathbb E}\left[\vert U_{t\wedge\tau}X^\tau_t\vert\right]={\mathbb E}\left[\vert (UX)^\tau_t\vert\right].

So {UX^\tau} is integrable if and only if {(UX)^\tau} is. In this case, let M be the difference {M=UX^\tau-(UX)^\tau=(U-U^\tau)X^\tau}. For times {s\le t},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}[M_t\vert\mathcal{F}_s]&\displaystyle={\mathbb E}\left[(U_t-U_{t\wedge\tau})X_{t\wedge\tau}\;\middle\vert\mathcal{F}_s\right]\smallskip\\ &\displaystyle={\mathbb E}\left[{\mathbb E}[U_t-U_{t\wedge\tau}\vert\mathcal{F}_{t\wedge(s\vee\tau)}]X_{t\wedge\tau}\;\middle\vert\mathcal{F}_s\right]\smallskip\\ &\displaystyle={\mathbb E}\left[(U_{t\wedge(s\vee\tau)}-U_{t\wedge\tau})X_{t\wedge\tau}\;\middle\vert\mathcal{F}_s\right]\smallskip\\ &\displaystyle=(U_s-U_{s\wedge\tau})X_\tau=M_s. \end{array}

The final equality here holds because both sides vanish on the event {\{\tau\ge s\}} while, on {\{\tau<s\}}, we have {t\wedge(s\vee\tau)=s}, {X_{t\wedge\tau}=X_{s\wedge\tau}=X_\tau} and {(U_s-U_{s\wedge\tau})X_\tau} is {\mathcal{F}_s}-measurable. Therefore, M is a martingale, and {UX^\tau} is a martingale if and only if {(UX)^\tau} is.

So, given stopping times {\tau_n\uparrow\infty}, {X^{\tau_n}} are {{\mathbb Q}}-martingales if and only if {UX^{\tau_n}} and, therefore, {(UX)^{\tau_n}} are {{\mathbb P}}-martingales. \Box

If X is a local martingale, then Lemma 3 can be used to derive a decomposition of X into the sum of a {{\mathbb Q}}-local martingale and an FV process defined in terms of the quadratic covariation [U,X].

Theorem 4 Let {{\mathbb Q}} be an equivalent measure to {{\mathbb P}}, and suppose that U given by (3) is cadlag. If X is a local martingale then {X=Y+V}, where Y is a {{\mathbb Q}}-local martingale and V is the FV process

\displaystyle  V=\int U^{-1}\,d[U,X].

Proof: It is just necessary to show that {Y=X-V} is a local martingale under {{\mathbb Q}}. Applying integration by parts,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle d(UV) &\displaystyle= U\,dV + V_-\,dU\smallskip\\ &\displaystyle=d[U,X]+V_-\,dU \end{array}

As U is a martingale, this shows that UV-[U,X] is a local martingale. Also, integration by parts expresses {UX-[U,X]} as {U_0X_0+\int X_-\,dU+\int U_-\,dX}, which is a local martingale, so

\displaystyle  UY=(UX-[U,X])-(UV-[U,X])

is a local martingale as required. \Box

Continuous Local Martingales

A useful method of constructing measure changes is to use a Doléans exponential. For a local martingale M this is the solution to the stochastic differential equation {dU=U_-\,dM} with initial condition {U_0=1} so, by preservation of the local martingale property, U is a local martingale. For continuous local martingales, the Doléans exponential {\mathcal{E}(M)_t\equiv U_t} is given by

\displaystyle  \mathcal{E}(M)_t=\exp\left(M_t-M_0-\frac{1}{2}[M]_t\right).

In particular, if the quadratic variation at infinity, {[M]_\infty}, is finite then the limit {M_\infty=\lim_{t\rightarrow\infty}M_t} exists and {U_\infty} will be strictly positive. If, furthermore, U is a uniformly integrable martingale rather than just a local martingale then, {U_t=\lim_{s\rightarrow\infty}{\mathbb E}[U_s\vert\mathcal{F}_t]={\mathbb E}[U_\infty\vert\mathcal{F}_t]} for each time t. So {{\mathbb Q}=U_\infty\cdot{\mathbb P}} defines an equivalent measure with U satisfying equation (3). Also, for any {{\mathbb P}}-local martingale X, {d[U,X]=U\,d[M,X]}, and Theorem 4 shows that {X-[M,X]} is a {{\mathbb Q}}-local martingale. Applying this with {M=\int\xi\,dX} gives the following.
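
To see the closed-form expression emerge from the defining SDE, here is a minimal discretization sketch (numpy assumed; the integrand {\xi_t=\sin t} in {M=\int\xi\,dB} is an arbitrary choice). The Euler scheme for {dU=U\,dM} multiplies up the increments {1+\Delta M}, and the product approaches {\exp(M-[M]/2)} as the step size shrinks.

```python
import numpy as np

rng = np.random.default_rng(3)
steps, T = 10**5, 1.0
dt = T / steps
t = np.linspace(0.0, T, steps + 1)

# Increments of M = int xi dB with xi_t = sin(t).
dB = rng.standard_normal(steps) * np.sqrt(dt)
xi = np.sin(t[:-1])
dM = xi * dB

M_T = dM.sum()
QV_T = (xi**2).sum() * dt              # [M]_T = int xi^2 ds

U_T = np.prod(1.0 + dM)                # Euler scheme for dU = U dM, U_0 = 1
print(U_T, np.exp(M_T - 0.5 * QV_T))   # agree up to discretization error
```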

Theorem 5 (Girsanov transformation) Let X be a continuous local martingale, and {\xi} be a predictable process such that {\int_0^\infty\xi^2\,d[X]<\infty}. If {U\equiv\mathcal{E}(\int\xi\,dX)} is a uniformly integrable martingale then {{\mathbb E}[U_\infty]=1} and the measure {{\mathbb Q}=U_\infty\cdot{\mathbb P}} is equivalent to {{\mathbb P}}. Then, X decomposes as

\displaystyle  X=Y+\int\xi\,d[X] (4)

for a {{\mathbb Q}}-local martingale Y.

So, Girsanov transformations allow us to change to an equivalent measure where the local martingale X gains a drift term which is an integral with respect to its quadratic variation [X]. In fact, continuous local martingales always decompose as in (4) under any equivalent change of measure.

In the following, it is required that we take a cadlag version of the martingale U, which is guaranteed to exist if the filtration {\{\mathcal{F}_t\}_{t\ge 0}} is right-continuous. However, in these notes we are not assuming that filtrations are right-continuous. Still, it is always possible to pass to the right-continuous filtration {\mathcal{F}_{t+}=\bigcap_{s>t}\mathcal{F}_s}. Then, a continuous process starting at zero will be {\mathcal{F}_{t+}}-adapted if and only if it is {\mathcal{F}_t}-adapted, and the two filtrations define the same space of continuous local martingales starting from 0. So Theorem 6 can be applied to arbitrary equivalent changes of measure on all complete filtered probability spaces.

Theorem 6 Let {{\mathbb Q}} be an equivalent measure to {{\mathbb P}}, and suppose that U given by (3) has a cadlag version. If X is a continuous local martingale, then there is a predictable process {\xi} satisfying {\int_0^\infty\xi^2\,d[X]<\infty} and {d[U,X]=U\xi\,d[X]}. Furthermore,

  • X decomposes as {X=Y+\int\xi\,d[X]} for a {{\mathbb Q}}-local martingale Y.
  • U decomposes as {U=V\mathcal{E}(\int\xi\,dX)} for a nonnegative local martingale V with {[V,X]=0}.

Note that if {\mathcal{E}(\int\xi\,dX)} is a uniformly integrable martingale, then the decomposition given for U implies that the change of measure can be decomposed into a Girsanov transformation, precisely as in Theorem 5, followed by a measure change given by the process V satisfying [V,X]=0. In general, however, this will not be the case since {\mathcal{E}(\int\xi\,dX)} need only be a local martingale.

Proof: By Theorem 4, X=Y+V for a {{\mathbb Q}}-local martingale Y and FV process {V\equiv\int U^{-1}\,d[U,X]}. Next, by the Kunita-Watanabe inequality, if {\zeta} is a nonnegative process satisfying {\int\zeta\,d[X]=0} then,

\displaystyle  \int \zeta\,\vert dV\vert\le\left(\int U^{-2}\,d[U]\int\zeta^2\,d[X]\right)^\frac12=0.

That is, V is absolutely continuous with respect to [X]. We would like to use a stochastic version of the Radon-Nikodym theorem to imply the existence of a predictable process {\xi} with {V=\int\xi\,d[X]}. In fact, this is possible as stated below in Lemma 7, so {X=Y+\int\xi\,d[X]} as required. It still needs to be shown that {\int_0^\infty\xi^2\,d[X]} is finite and that U satisfies the required decomposition.

Next, we show that {[U]_\infty} is finite. As {U_t={\mathbb E}[U_\infty\vert\mathcal{F}_t]}, the process

\displaystyle  \tilde U_t=\begin{cases} U_{t/(1-t)},&t<1,\\ U_\infty,&t\ge 1, \end{cases}

is a martingale, under its natural filtration. As cadlag martingales are semimartingales, and have well defined quadratic variation, {[U]_\infty=[\tilde U]_1} is finite. Then, applying the Kunita-Watanabe inequality again, for any positive constant K,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\int1_{\{\vert\xi\vert<K\}}\xi^2\,d[X]&\displaystyle=\int1_{\{\vert\xi\vert<K\}}\xi\,dV\smallskip\\ &\displaystyle=\int1_{\{\vert\xi\vert<K\}}\xi U^{-1}\,d[U,X]\smallskip\\ &\displaystyle\le\left(\int U^{-2}\,d[U]\int1_{\{\vert\xi\vert<K\}}\xi^2\,d[X]\right)^\frac12. \end{array}

Squaring this inequality, cancelling the (finite) common factor, and letting K increase to infinity gives,

\displaystyle  \int_0^\infty\xi^2\,d[X]\le\int_0^\infty U^{-2}\,d[U]\le(\sup_tU_t^{-2})[U]_\infty.

This is finite, since it has been shown that {[U]_\infty} is finite and, as {U^{-1}} is a cadlag {{\mathbb Q}}-martingale tending to the finite limit {U^{-1}_\infty}, its paths are almost surely bounded.

Finally, as {\int_0^t\xi^2\,d[X]} is finite for all times t, {\xi} is X-integrable. Define the local martingales {M=\int U^{-1}_-\,dU} and {N=\int\xi\,dX}. As X is continuous, {[U,X]} is a continuous FV process, so the countably many times at which {U_-\not=U} have zero {d[U,X]}-measure and the quadratic covariation is given by

\displaystyle  [M,X]=\int U^{-1}_-\,d[U,X]=\int U^{-1}\,d[U,X]=V=\int\xi\,d[X]=[N,X].

So, {[M-N,X]=0}, {[M-N,N]=\int\xi\,d[M-N,X]=0}, and integration by parts applied to the definition of Doléans exponentials gives

\displaystyle  U=U_0\mathcal{E}(M)=U_0\mathcal{E}(M-N)\mathcal{E}(N).

The decomposition of U follows by taking {V=U_0\mathcal{E}(M-N)}. \Box

The proof of Theorem 6 made use of the following stochastic version of the Radon-Nikodym theorem, which we now prove.

Lemma 7 Let A be a continuous FV process and B be a continuous adapted increasing process such that {\int_0^t\xi\,dA=0} (almost surely) for all {t\ge 0} and bounded nonnegative predictable {\xi} satisfying {\int_0^t\xi\,dB=0}.

Then, there is a predictable process {\alpha} satisfying {\int_0^t\vert\alpha\vert\,dB<\infty}, and {A=A_0+\int\alpha\,dB}.

Proof: Let us first suppose that A and B have integrable variation and, without loss of generality, assume that {A_0=B_0=0}. Then, we can define the following finite signed measures on the predictable measurable space {({\mathbb R}_+\times\Omega,\mathcal{P})},

\displaystyle  \mu(\xi)={\mathbb E}\left[\int_0^\infty\xi\,dA\right],\ \nu(\xi)={\mathbb E}\left[\int_0^\infty\xi\,dB\right]

for bounded predictable {\xi}. As B is increasing, {\nu} is a (nonnegative) measure. If {\nu(S)=0} for a predictable set S, then {\int 1_S\,dB=0} and, from the condition of the lemma, {\int 1_S\,dA=0}, giving {\mu(S)=0}. So, {\mu} is absolutely continuous with respect to {\nu} and the Radon-Nikodym derivative {\alpha=d\mu/d\nu} exists. This is a predictable process satisfying {\nu(\vert\alpha\vert)<\infty} and {\mu(\xi)=\nu(\alpha\xi)} for all bounded predictable {\xi}.

Then, the following process has integrable variation

\displaystyle  M=A-\int\alpha\,dB

and, for any bounded predictable {\xi},

\displaystyle  {\mathbb E}\left[\int_0^\infty\xi\,dM\right]=\mu(\xi)-\nu(\xi\alpha)=0.

Taking {\xi=1_A1_{(s,t]}} for times {s<t} and {A\in\mathcal{F}_s} gives {{\mathbb E}[1_A(M_t-M_s)]=0}, so M is a martingale. As continuous FV local martingales are constant, M is identically 0, giving {A=\int\alpha\,dB} as required.

Finally, let us drop the assumption that A and B have integrable variation, and define the stopping times

\displaystyle  \tau_n=\inf\left\{t\ge 0\colon\int_0^t\,\vert dA\vert+B_t\ge n\right\}.

By continuity, the stopped processes {A^{\tau_n}} and {B^{\tau_n}} have variation bounded by n so, by the above argument, there are predictable processes {\alpha^1,\alpha^2,\ldots} such that {A^{\tau_n}=\int\alpha^n\,dB^{\tau_n}}. The result now follows by taking {\alpha=\sum_n 1_{(\tau_{n-1},\tau_n]}\alpha^n}. \Box

One difficulty in applying Theorem 5 to construct measure changes is that the Doléans exponential is only guaranteed to be a local martingale, whereas we need it to be a uniformly integrable martingale. The following gives a necessary and sufficient condition for a nonnegative local martingale to be a uniformly integrable martingale.

Lemma 8 Let U be a nonnegative local martingale with {{\mathbb E}[U_0]=1}. Then, {{\mathbb E}[U_\tau]\le 1} for all stopping times {\tau}, and U is a uniformly integrable martingale if and only if {{\mathbb E}[U_\infty]=1}.

Proof: Choose stopping times {\tau_n\uparrow\infty} such that {U^{\tau_n}} are uniformly integrable martingales. By Fatou’s lemma and optional sampling, for stopping times {\sigma\le\tau},

\displaystyle  U_\sigma=\lim_{n\rightarrow\infty}U^{\tau_n}_\sigma=\lim_{n\rightarrow\infty}{\mathbb E}\left[U^{\tau_n}_\tau\vert\mathcal{F}_\sigma\right]\ge{\mathbb E}\left[U_\tau\vert\mathcal{F}_\sigma\right].

So, U is a supermartingale. Taking expectations with {\sigma=0} gives {1\ge{\mathbb E}[U_\tau]}. Conversely, if {{\mathbb E}[U_\infty]=1} then, using {\tau=\infty}, {U_\sigma-{\mathbb E}[U_\infty\vert\mathcal{F}_\sigma]} is a nonnegative random variable with expectation

\displaystyle  {\mathbb E}\left[ U_\sigma-{\mathbb E}[U_\infty\vert\mathcal{F}_\sigma]\right]={\mathbb E}[U_\sigma]-{\mathbb E}[U_\infty]\le 0.

So, {U_\sigma={\mathbb E}[U_\infty\vert\mathcal{F}_\sigma]}, showing that U is a uniformly integrable martingale. \Box
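
The canonical example of this failure is {U_t=\mathcal{E}(B)_t=\exp(B_t-t/2)} for a Brownian motion B: it is a positive martingale with {{\mathbb E}[U_t]=1} at every finite time, but {B_t-t/2\rightarrow-\infty} gives {U_\infty=0}, so {{\mathbb E}[U_\infty]=0<1} and U is not uniformly integrable. A quick numerical sketch (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10**6
for T in [1.0, 4.0, 16.0]:
    # Sample U_T = exp(B_T - T/2) directly; in theory E[U_T] = 1 for all T.
    U_T = np.exp(rng.standard_normal(n) * np.sqrt(T) - 0.5 * T)
    # The median collapses like exp(-T/2), while the sample mean is held
    # near 1 by increasingly rare large paths (and grows noisy with T).
    print(T, np.mean(U_T), np.median(U_T))
```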

This lemma is useful in theory but, in practice, the expectation of {U_\infty} is often hard to calculate directly. Instead, the following sufficient conditions can be used to show that a Doléans exponential is a uniformly integrable martingale. Condition (5) is Kazamaki’s criterion and (6) is Novikov’s criterion.

Lemma 9 Let M be a continuous local martingale with {M_0=0}. The following is a sufficient condition for {\mathcal{E}(M)} to be a uniformly integrable martingale,

\displaystyle  \sup_\tau{\mathbb E}\left[e^{\frac12M_\tau}\right]<\infty (5)

where the supremum is taken over all bounded stopping times {\tau}. In particular, this condition is satisfied and {\mathcal{E}(M)} is a uniformly integrable martingale, whenever

\displaystyle  {\mathbb E}\left[e^{\frac12[M]_\infty}\right]<\infty. (6)

Proof: The following simple identity for a constant r will be used

\displaystyle  \mathcal{E}(rM)=\mathcal{E}(M)^{r^2}e^{r(1-r)M}.

Suppose that {{\mathbb E}[\exp(M_\tau/2)]\le K} for some constant K and all bounded stopping times {\tau}. Then choose real numbers {0<a<1} and {p,q,r>1} with {1/p+1/q=1}. Hölder's inequality gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[\mathcal{E}(aM)^\frac{r^2}p_\tau\right]&\displaystyle={\mathbb E}\left[\mathcal{E}(aM)_\tau^{\frac{r^2}{p}} e^{\frac{r}{p}(1-r)aM_\tau}e^{\frac{r}{p}(r-1)aM_\tau}\right]\smallskip\\ &\displaystyle\le{\mathbb E}[\mathcal{E}(aM)^{r^2}_\tau e^{r(1-r)aM_\tau}]^\frac1p{\mathbb E}[e^{\frac{q}{p}r(r-1)aM_\tau}]^\frac1q\smallskip\\ &\displaystyle={\mathbb E}[\mathcal{E}(raM)_\tau]^\frac1p{\mathbb E}[e^{\frac{r-1}{p-1}raM_\tau}]^\frac1q\smallskip\\ &\displaystyle\le{\mathbb E}\left[\exp\left(\frac{r-1}{p-1}raM_\tau\right)\right]^\frac1q \end{array}

Lemma 8 has been applied here to bound the expectation of {\mathcal{E}(raM)} by 1. Setting {p=1+2(r-1)ra}, we have {s\equiv r^2/p>1} for r close to 1 and {\frac{r-1}{p-1}ra=\frac12}, giving

\displaystyle  {\mathbb E}\left[\mathcal{E}(aM)_\tau^s\right]\le{\mathbb E}\left[e^{\frac12M_\tau}\right]^\frac1q\le K^\frac1q.

So, {\mathcal{E}(aM)} is an {L^s}-bounded martingale and hence is uniformly integrable. Therefore, {{\mathbb E}[\mathcal{E}(aM)_\infty]=1} for all {0<a<1}.

Next, using Hölder's inequality,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} 1&={\mathbb E}[\mathcal{E}(aM)_\infty]={\mathbb E}[\mathcal{E}(M)^{a^2}_\infty e^{a(1-a)M_\infty}]\smallskip\\ &\le{\mathbb E}[\mathcal{E}(M)_\infty]^{a^2}{\mathbb E}[e^{\frac{a}{1+a}M_\infty}]^{1-a^2}\smallskip\\ &\le{\mathbb E}[\mathcal{E}(M)_\infty]^{a^2}{\mathbb E}[e^{\frac12M_\infty}]^{2a(1-a)} \end{array}

The last inequality here is just Jensen’s inequality, using the fact that {a/(1+a)<1/2}. Letting a increase to 1 gives {{\mathbb E}[\mathcal{E}(M)_\infty]\ge 1} so, by Lemma 8, {\mathcal{E}(M)} is a uniformly integrable martingale as required.

Finally, suppose that (6) is satisfied. Then, for a stopping time {\tau}, the Cauchy-Schwarz inequality gives

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} {\mathbb E}\left[e^{\frac{1}{2}M_\tau}\right]&={\mathbb E}\left[e^{\frac{1}{2}M_\tau-\frac{1}{4}[M]_\tau}e^{\frac{1}{4}[M]_\tau}\right]\smallskip\\ &\le{\mathbb E}\left[\mathcal{E}(M)_\tau\right]^\frac12{\mathbb E}\left[e^{\frac{1}{2}[M]_\tau}\right]^\frac12\smallskip\\ &\le{\mathbb E}\left[e^{\frac12[M]_\infty}\right]^\frac12 \end{array}

so (5) holds, as required. \Box

Brownian Motion

With the help of Lévy’s characterization, the results above can be strengthened significantly when the local martingale is a Brownian motion. If B is a Brownian motion, then it is possible to construct an equivalent measure change under which it decomposes as the sum of a Brownian motion and the absolutely continuous process {\int\xi_s\,ds}, for a given predictable process {\xi}. This is stated, more generally for d-dimensional Brownian motion, in the following theorem.

Theorem 10 Let {B=(B^1,\ldots,B^d)} be a standard d-dimensional Brownian motion on the underlying filtered probability space and {\{\xi^i\}_{i=1,\ldots,d}} be predictable processes satisfying {\int_0^\infty(\xi^i_s)^2\,ds<\infty} (almost surely). If

\displaystyle  U_t\equiv\exp\left(\sum_{i=1}^d\int_0^t\xi^i\,dB^i-\frac{1}{2}\sum_{i=1}^d\int_0^t(\xi^i_s)^2\,ds\right)

is a uniformly integrable martingale then {{\mathbb E}[U_\infty]=1} and the measure {{\mathbb Q}=U_\infty\cdot{\mathbb P}} is equivalent to {{\mathbb P}}. Then, B decomposes as

\displaystyle  B^i=\tilde B^i+\int\xi^i_s\,ds (7)

for a d-dimensional Brownian motion {\tilde B} with respect to {{\mathbb Q}}.

Here, U is the Doléans exponential {\mathcal{E}(M)} for the local martingale {M=\sum_i\int\xi^i\,dB^i} and, by Novikov’s criterion (6) above, U will be a uniformly integrable martingale whenever {\exp(\frac12\sum_i\int_0^\infty(\xi^i_s)^2\,ds)} has finite expectation.

Proof: As Brownian motion has quadratic variation {[B^i]_t=t}, the condition on {\xi^i} ensures that it is {B^i}-integrable, so we can define the continuous local martingale {X=\sum_i\int\xi^i\,dB^i}. Then, {U=\mathcal{E}(X)=\exp(X-[X]/2)} is a local martingale and, by Theorem 5, if it is a uniformly integrable martingale then {{\mathbb E}[U_\infty]=1} and {{\mathbb Q}=U_\infty\cdot{\mathbb P}} defines an equivalent change of measure.

By Theorem 4, the decomposition {B^i=\tilde B^i +V^i} exists for a {{\mathbb Q}}-local martingale {\tilde B^i} and, as {[B^i,B^j]_t=\delta_{ij}t},

\displaystyle  V^i=\int U^{-1}\,d[U,B^i]=\sum_j\int\xi^j\,d[B^j,B^i]=\int\xi^i_s\,ds.

Finally, as {V^i} are continuous FV processes they do not contribute to quadratic covariations involving {\tilde B^i}, giving {[\tilde B^i,\tilde B^j]_t=[B^i,B^j]_t=\delta_{ij}t}. So, by Lévy’s characterization, {\tilde B} is a d-dimensional Brownian motion under {{\mathbb Q}}. \Box
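
A standard practical application of Theorem 10 is importance sampling: to estimate a rare-event probability under {{\mathbb P}}, sample under a measure {{\mathbb Q}} giving the event reasonable probability, then reweight by {d{\mathbb P}/d{\mathbb Q}=U_\infty^{-1}}. A minimal sketch (numpy assumed; the target {{\mathbb P}(B_T>a)} and the constant drift {\xi=a/T} on {[0,T]}, zero afterwards, are illustrative choices; for such {\xi}, Novikov's criterion (6) holds trivially since {\frac12\int_0^\infty\xi_s^2\,ds} is deterministic):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
T, a, n = 1.0, 4.0, 10**5
xi = a / T                      # constant drift, so B_T ~ N(a, T) under Q

# Sample B_T under Q and reweight by dP/dQ = exp(-xi*B_T + xi^2*T/2).
B_T = xi * T + rng.standard_normal(n) * np.sqrt(T)
weights = np.exp(-xi * B_T + 0.5 * xi**2 * T)
estimate = np.mean(weights * (B_T > a))

exact = 0.5 * (1.0 - erf(a / sqrt(2.0 * T)))   # P(N(0,T) > a)
print(estimate, exact)                         # both close to 3.17e-5
```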

Finally, Brownian motion transforms according to (7) under any equivalent change of measure. That is, it picks up a drift {\xi} satisfying {\int_0^\infty\xi^2_s\,ds<\infty}.

Theorem 11 Let {{\mathbb Q}} be an equivalent measure to {{\mathbb P}}, and suppose that U given by (3) is cadlag. If {B=(B^1,\ldots,B^d)} is a standard d-dimensional Brownian motion on the underlying filtered probability space, then there are predictable processes {\{\xi^i\}_{i=1,\ldots,d}} satisfying {\int_0^\infty(\xi^i_s)^2\,ds<\infty} and {d[U,B^i]_t=U_{t}\xi^i_t\,dt}. Then

  • B decomposes as {B^i=\tilde B^i +\int\xi^i_s\,ds} for a standard d-dimensional Brownian motion {\tilde B} with respect to {{\mathbb Q}}.
  • U decomposes as {U=V\mathcal{E}(M)} where {M=\sum_i\int\xi^i\,dB^i} and V is a positive local martingale with {[V,B^i]=0}.

Proof: As Brownian motion has quadratic variation {[B^i]_t=t}, the existence of predictable processes {\xi^i} satisfying {\int_0^\infty(\xi^i_s)^2\,ds<\infty} and {d[U,B^i]_t=U_t\xi^i_t\,dt} is given by Theorem 6. Also, by the same theorem, {B^i=\tilde B^i+\int\xi^i_s\,ds} for {{\mathbb Q}}-local martingales {\tilde B^i}. As continuous finite variation processes do not contribute to quadratic covariations, {[\tilde B^i,\tilde B^j]_t=[B^i,B^j]_t=\delta_{ij}t} and Lévy's characterization shows that {\tilde B} is a d-dimensional Brownian motion under {{\mathbb Q}}.

Finally, defining the local martingales {M=\int U^{-1}_-\,dU} and {N=\sum_i\int\xi^i\,dB^i},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle [M,B^i]&\displaystyle=\int U^{-1}_-\,d[U,B^i]=\int\xi^i_t\,dt\smallskip\\ &\displaystyle=\sum_j\int\xi^j\,d[B^j,B^i]=[N,B^i]. \end{array}

So, {[M-N,B^i]=0} and {[M-N,N]=\sum_i\int\xi^i\,d[M-N,B^i]=0} giving,

\displaystyle  U=U_0\mathcal{E}(M)=U_0\mathcal{E}(M-N)\mathcal{E}(N)

and the decomposition for U follows by taking {V=U_0\mathcal{E}(M-N)}. \Box

25 thoughts on “Girsanov Transformations”

  1. Your posts are always enlightening!
    I was just wondering if there was an available PDF version of your blog entries, somewhere?

  2. Hi. No, I don’t have a PDF version. This has been asked before though, so it’s probably worth creating some. Maybe have something ready in the next week or two (starting with the “filtrations and processes” section).

  3. Hopefully I got the Latex now right….

    Great Blog indeed,

    I recently discussed with some friends from uni (physicists) a question that was loosely related to the Girsanov theorem.

    Let Z_T=exp(-\int_0^T \psi_u dW_u - .5 \int_0^T \psi_u^2du) denote the Girsanov density of a measure R with respect to another measure P, where \psi is any process such that Girsanov's theorem is valid.

    Then the information entropy between the two measures is defined through

    E^P[Z_T \log Z_T]

    which equates to

    E^P[Z_T \log Z_T]= E^R[ -\int_0^T \psi_u d \hat W_u + .5 \int_0^T \psi_u^2du ] for some R Brownian motion \hat W.

    We discussed at great length whether E^R[ -\int_0^T \psi_u d \hat W_u + .5 \int_0^T \psi_u^2du ] < \infty implies E^R[.5 \int_0^T \psi_u^2du ] < \infty.

    What if \int_0^T \psi_u d \hat W_u is replaced by a general continuous martingale?

    Opinions basically ranged from this is trivially true (as the finiteness is closely related to square integrability) to the complete opposite.

    Do you have any hints or ideas on that?

    Cheers

    Roger

    1. The answer to your question is yes, it is true! I don’t know if there is a `trivial’ argument, but I can give you a proof. It is also true for general continuous local martingales.
      One thing though. It is not absolutely clear, when you say that -\int_0^T\psi\,d\hat W+\frac12\int_0^T\psi^2_u\,du has expectation less than infinity, whether you mean its absolute value is integrable, or its positive part is integrable. (Taking an expectation of a nonintegrable random variable is not well defined in general, unless it is nonnegative). I’ll assume you mean its positive part, as this is the weaker condition and is still enough to imply what you want.

      For the proof: Let W be a standard Brownian motion, b > 0 be a constant, and set X_t = W_t - bt. Denote its maximum by X^*_t = \sup_{s\le t}X_s. I claim that X^*_\infty is integrable. Letting T_a be the first time at which X hits a positive value a, this is the same as the first time W hits the sloping line a + bt. The distribution of T_a is given by

      \displaystyle\mathbb{E}\left[\exp(-\theta T_a)\right]=\exp\left(-a\left(b+\sqrt{b^2+2\theta}\right)\right)

      which is a standard result. Letting θ go to 0 gives

      \displaystyle\mathbb{P}(X^*_\infty\ge a)=\mathbb{P}(T_a<\infty)=\exp\left(-2ab\right)

      Integrating with respect to a gives \mathbb{E}[X^*_\infty]=\frac{1}{2b}<\infty.

      Next, if M is a continuous local martingale starting at zero and X = M - b[M] then \mathbb{E}[X^*_\infty]\le\frac{1}{2b}. This follows because all such continuous local martingales are time changes of Brownian motion. In particular, (M_T - b[M]_T)_+ is integrable for any time T.

      So, if M is a continuous local martingale starting at 0 and such that (-M_T + [M]_T/2)_+ is integrable then, choosing 0 < b < 1/2,

      \displaystyle\left(\frac12-b\right)\mathbb{E}\left[[M]_T\right]\le\mathbb{E}\left[(-M_T+[M]_T/2)_++(M_T-b[M]_T)_+\right]<\infty.

      This gives what you want.

      Regards,
      George

      [btw, I deleted your first post.]

    2. I should add though, your question is indeed trivial in the case where M=\int\psi\,d\hat W is a martingale. However, for the Girsanov transformation to be defined, this need not be true. It is only guaranteed that M is a local martingale.

    3. Alternatively, there is the following much quicker argument. If b > 0 then \exp(2b(M - b[M])) is a positive local martingale and, hence, is a supermartingale. So its expectation is bounded (by 1) and, as exponentials grow faster than linearly, (M - b[M])_+ has finite expectation.

  4. Great,

    thanks for having a clear argument for that. And I meant indeed local martingale – that was actually the crucial point 🙂

    Roger

  5. Hello George,

    Just wonder what property of a diffusion process is preserved after the absolutely continuous measure change. For example, if X_t is an ergodic diffusion (which means it has stationary distributions when time goes to infinity), then after the measure change, its drift part is changed and we denote it as Y_t. Then is Y_t still ergodic? Can you give me a reference on this or a counter-example?

    Btw, I always wonder how the Girsanov theorem behaves when we push the time to infinity. I see some authors use Girsanov for optimal stopping problems, and that involves defining the Radon-Nikodym derivative process up to a random stopping time. I am uneasy with this and wonder whether there is some reference on using the Girsanov theorem up to a random time rather than the fixed time case.

    Thank you very much~

    Rocky 🙂

    1. Hi Rocky,

      Apologies for the slow response. I haven’t had much time to log on, and don’t have much time now, but I’ll try and quickly answer.

      I’m not familiar with ergodic diffusions. But, I think that you maybe mean that the distribution of X_t tends weakly to a limit as t goes to infinity. Or, that T^{-1}\int_0^T f(X_t)\,dt\to\mu(f) (in probability) as T goes to infinity (where f is a bounded continuous function and \mu is the limiting distribution). I don’t think that either of these are going to be affected by an absolutely continuous change of measure. Assuming the limit is independent of \mathcal{F}_t, then you can approximate the Radon-Nikodym derivative d\mathbb{Q}/d\mathbb{P}=X in L^1 by X_s=\mathbb{E}[X\mid\mathcal{F}_s]. Using \mathbb{Q}=X_s\cdot\mathbb{P} will not change the limiting distribution. Then consider s large.

      For the second question. Over a finite horizon, say, you can transform a standard Brownian motion into one with constant nonzero drift with a Girsanov transform. Over an infinite time horizon, this is not possible with an absolutely continuous change of measure. This is because events such as \{\liminf_{t\to\infty}\vert B_t\vert=0\} have probability one for a Brownian motion, but zero for a BM with drift. You can do it by a local Girsanov transform though, by defining \mathbb{Q}\vert_{\mathcal{F}_t}=X_t\cdot\mathbb{P}\vert_{\mathcal{F}_t}, where X_t is the martingale defining the measure change. You have to be careful in the choice of underlying probability space, and not complete the filtration (as the measure change is not absolutely continuous, you need to be careful about null sets under the original measure which could have positive probability under the new measure). You can also apply Girsanov transforms up to a stopping time T, which is similar to just applying it to the process stopped at time T. You can even apply a Girsanov transform up to a sequence of stopping times T_n increasing to a limit T, even though the change of measure might not be absolutely continuous on \mathcal{F}_T. For example, I did this in one of my posts here (Zero-Hitting and Failure of the Martingale Property). In fact, the stopping time can be almost-surely infinite under the original measure and yet almost surely finite in the transformed measure so, again, you have to be careful. I might come back to this and check out some references when I have time.

      1. Hello George,

        Here is what I understand:
        Approximate \frac{dQ}{dP}=X by X_s =E[X\mid \mathbb{F}_s]. Assume the limiting random variable of X_t is \bar{X} under measure P, then for any \epsilon>0,

        \setlength\arraycolsep{2pt}\begin{array}{rl} \displaystyle\lim_{t\rightarrow \infty} E^Q [1_{\mid X_t -\bar{X}  \mid \geq \epsilon}]&\displaystyle=\lim_{t\rightarrow \infty} E^P [X_s 1_{\mid X_t -\bar{X}  \mid \geq \epsilon}]\smallskip\\ &\displaystyle=\lim_{t\rightarrow \infty}( E^P [X_s]\times E^P [ 1_{\mid X_t -\bar{X}  \mid \geq \epsilon}])\smallskip\\ &\displaystyle= E^P [X_s]\times \lim_{t\rightarrow \infty}E^P [ 1_{\mid X_t -\bar{X}  \mid \geq \epsilon}]\smallskip\\ &\displaystyle=0. \end{array}

        The last equality is because of the ergodicity of X_t under measure P.

        My question is what is the requirement on s? Do we need to restrict s>t in order to transform the time t process X_t? If so, then s\rightarrow \infty and can we apply Girsanov theorem in this case?

        Thanks!

        1. By the way, how to display latex formulas on the webpage, I know it may be silly to ask, but I never post formula on blogs myself before…

          Thanks for your reply and I am not in a hurry to know the answer~ I know you are quite busy 🙂

        2. You start the latex expression with ‘$latex ‘ (the space is needed) and close with ‘$’. You can’t do displaymath – not directly anyway. I’ll edit the latex in your post when I log on later (and I might add a page about posting latex. It’s not obvious. Edit: Here it is!)

        3. Hi.

          Sorry about the delay in answering. I did read your comment earlier, but was not really sure what you were asking (I’m still not sure). Here, X is the process defining the Girsanov transform, but it is also the process which you want to transform the law of (in general, they will be different). Have I got that right? If Q is equivalent to P then any process which converges to a limit under P also converges to the same limit under Q. That much is true, and is a consequence of them having the same events of probability 1. However, I didn’t think that “ergodic diffusion” referred to convergence of the process, just convergence of the distribution. That can change but, if you assume that X_t becomes independent of X_s in the limit as t goes to infinity (and fixed s), then my argument above was that the distribution of X_t must also have the same limit under both measures. However, this does not hold if you consider “local” Girsanov transformations.

  6. Hello George,

    Sorry about the confusion. Here the process X_t is a martingale. It is used in defining the Radon-Nikodym derivative governing the measure change. It is also the process to which we apply this measure change. This setup is a bit strange and the origin comes from the “change of numéraire” technique in finance.

    Ergodic diffusion in my understanding indeed means convergence only in the distributional sense, that is, convergence to some unknown random variable \bar{X} with a certain distribution. And this distribution is the “limiting distribution”.

    Here s is fixed and perhaps, by the strong Markov property, I can assume that the behavior of \lim_{t\rightarrow \infty, t>s} X_t is independent of X_s.

    By the way, are you familiar with the Skorokhod embedding problem? A survey is at: http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdfview_1&handle=euclid.ps/1104335302

    Maybe you will be interested in writing a blog on that. The problem is now I get addicted to reading your blog for self study of probability theory rather than reading thick textbooks 🙂

    1. Zhenyu (or should I call you Rocky?): Just time for a quick comment. I don’t think it is important that X is used for both the measure change and is the process to which the measure change is applied. You can’t change the limiting distribution by an equivalent change of measure (assuming that the limit is independent of Xs for fixed s). However, you can change it by a local Girsanov transform. Suppose that B is a Brownian motion and

      \displaystyle\frac{dQ}{dP}\Big\vert_{\mathcal{F}_t}=X_t=\exp(B_t-t/2)

      This is an exponential Brownian motion tending to zero. Under the transformed Q measure, X_t=\exp(\hat B_t+t/2) for a Q-Brownian motion \hat B, so it diverges to infinity.

      And, yes, I’m familiar with Skorohod embedding, but haven’t studied all the methods of solving it. I’ll think about that, but can’t promise anything now.

      Glad you like the blog! Of course, I wouldn’t want to take you away from the textbooks, but hopefully getting a fresh perspective might help to understand them a bit better.

      1. Hi George,

        You can call me Rocky, because my Chinese Pinyin name is hard to pronounce. I have a clear idea about the problem now and thanks for the illustration.

        Keep on writing great illuminating blogs on probability theory~

        Best regards!

        Rocky

  7. Hello George.

    May I ask which sufficient condition a (cadlag, adapted) process X should verify for there to exist an equivalent probability measure under which X is a local martingale? Being a semimartingale is necessary, but I have reasons (based on the financial mathematics literature) to think that it is not sufficient.

    Thanks in advance

    1. Actually I answered my own question with Theorem 4. I believe therefore that the No Free Lunch with Vanishing Risk of Delbaen and Schachermayer is only a restatement of \{\int_0^t h\,\mathrm{d}X,\;|h| \le 1\text{ simple}\} being bounded in probability.

      1. Yes, I think you’re right. I don’t have access to their papers right now but, from what I remember, Delbaen and Schachermayer define two kinds of no-arbitrage condition. No Free Lunch with Vanishing Risk is equivalent to what you state, and is equivalent to being a semimartingale. No Free Lunch with Bounded Risk is the stronger condition, and is equivalent to the existence of an equivalent local martingale measure, for continuous processes (if I remember these terms correctly).

  8. Hello,

    Is there a version of Girsanov’s theorem that can be applied to a stable Lévy process, or more generally, a process with infinite expectation?

    My apologies if I’ve missed a discussion elsewhere in this blog.

  9. Thanks for this very useful blog. I just have a little question about Lemma 3 in the above post. How do you go from the second last to the last line in the set of equalities? i.e. how do you get

    {\mathbb E}\left[(U_{t\wedge(s\vee\tau)}-U_{t\wedge\tau})X_{t\wedge\tau}\vert\mathcal{F}_s\right] =(U_s-U_{s\wedge\tau})X_\tau=M_s

    (sorry I am not sure what the markup is to display mathematics in the post, but the code should be correct [GL: I pasted in the latex from your follow-up comment, and deleted that comment. Hope you don’t mind]).

    1. Well, you have the equality

      (U_{t\wedge(s\vee\tau)}-U_{t\wedge\tau})X_{t\wedge\tau}=(U_s-U_{s\wedge\tau})X_\tau.

      Note: both sides are zero when \tau\ge s so you can restrict to \tau < s. As this is \mathcal{F}_s-measurable, the conditional expectation around the left hand side has no effect.
      Also, as U_s-U_{s\wedge\tau} is zero whenever \tau>s,

      (U_s-U_{s\wedge\tau})X_\tau=(U_s-U_{s\wedge\tau})X_{s\wedge\tau}=(U_s-U^\tau_s)X^\tau_s=M_s.

  10. George, I have gone through all the proofs thoroughly but there are some parts I can’t figure out. I would greatly appreciate if you could take a look and explain these to me as I am self studying stochastic calculus through these notes.

    1. In the paragraph above Lemma 2, if we assume U^{-1} is a uniformly integrable martingale and choose a cadlag version, Martingale convergence theorem tells us that it will converge to E[dP/dQ|F_\infty], but why would the sup_t U_t^{-1} be finite? I think you are referring to an argument in the proof of Theorem 6 where you state that U^{-1} is a cadlag martingale tending to the finite limit U_\infty^{-1}, so is bounded. But why does the convergence to U_\infty^{-1} imply that U_t^{-1} is bounded?

    2. In the final equation in the proof of Lemma 3, I can’t figure out how you get E[(U_{t \wedge (s\vee \tau)} – U_{t\wedge \tau}) X_{t \wedge \tau} |F_s] = (U_s – U_{s\wedge \tau})X_\tau. I can’t think of any properties of the conditional expectation that gives this. Could you explain which property you used here?

    3. In Theorem 4, you state that V=\int U^{-1} d[U,X] is a finite variation process, but why is this? My guess is that the variation of V is \int U^{-1} |d[U,X]| so if this is finite, then V has finite variation. But why is \int U^{-1} |d[U,X]| finite?

    4. In the first paragraph of the proof of Theorem 6, I cannot see how you directly apply Lemma 7 on V. So A in Lemma 7 would be |dV| here and B is d[X]. But Lemma 7 requires both to be continuous whereas Theorem 6 is just stated in the cadlag case. The proof of Lemma 7 requires continuity explicitly, so I cannot see how to modify Lemma 7 to apply in this case.

    5. In the proof of Theorem 6, you define M = \int U_-^{-1} dU but then wrote [M,X] = \int U^{-1} d[U,X]. Shouldn’t this be \int U_-^{-1} d[U,X]? But then this needs to be equal to V which is \int U^{-1} d[U,X]. So why are they equal when U is just cadlag?

    6. In the proof of Lemma 9, I am not sure how we are able to extend the identity for e(aM)_t to t=\infty, which is used for the identity E[e(aM)_\infty] = E[e(M)_\infty^{a^2} e^{a(1-a)M_\infty}] in the proof. That is, you have proven that e(aM) is uniformly integrable, so e(aM)_\infty exists and we have the identity e(aM)_t = e(M)_t^{a^2} e^{a(1-a)M_t}, but how do we ensure that if we take t to infinity then this identity still holds? I am not sure about this because there are two terms on the right hand side that involve M_t and I do not know that either term has a convergent limit to infinity.

    7. In the next part of the proof of Lemma 9, you show that E[e(M)_\infty] \ge 1, but why is it not necessary to show that E[e(M)_\infty] \le 1? I cannot find a previous result that already gives this.

    8. This is similar to question 5, in the proof of Theorem 11, you define M = \int U_-^{-1} dU and this time you wrote [M,B^i] = \int U_-^{-1} d[U,B^i], so I guess in 5 the integrand should be the left continuous version of U^{-1}. But then how does the left continuous version of U^{-1} cancel out with U here to give \int U_-^{-1} d[U,B^i] = \int \xi_t^i dt?
