The martingale representation theorem states that any martingale adapted to the filtration generated by a Brownian motion can be expressed as a stochastic integral with respect to that same Brownian motion.
Theorem 1 Let B be a standard Brownian motion defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and $\{\mathcal{F}_t\}_{t\ge0}$ be its natural filtration.
Then, every $\{\mathcal{F}_t\}$-local martingale M can be written as
$\displaystyle M_t=M_0+\int_0^t\xi\,dB$
for a predictable, B-integrable, process $\xi$.
As stochastic integration preserves the local martingale property for continuous processes, this result characterizes the space of all local martingales starting from 0 and defined with respect to the filtration generated by a Brownian motion as being precisely the set of stochastic integrals with respect to that Brownian motion. Equivalently, Brownian motion has the predictable representation property. This result is often used in mathematical finance as the statement that the Black-Scholes model is complete, meaning that any contingent claim can be exactly replicated by trading in the underlying stock. This does involve some rather large and somewhat unrealistic assumptions on the behaviour of financial markets and on the ability to trade continuously without incurring additional costs. However, in this post, I will be concerned only with the mathematical statement and proof of the representation theorem.
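For a concrete illustration of Theorem 1 (a standard example, added here for concreteness), consider the martingale $M_t=B_t^2-t$. Itô's formula gives $d(B_t^2)=2B_t\,dB_t+dt$, so
$\displaystyle M_t=B_t^2-t=\int_0^t2B_s\,dB_s,$
exhibiting the predictable, B-integrable integrand $\xi_s=2B_s$ promised by the theorem.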
In more generality, the martingale representation theorem can be stated for a d-dimensional Brownian motion as follows.
Theorem 2 Let $B=(B^1,\ldots,B^d)$ be a d-dimensional Brownian motion defined on the filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$, and suppose that $\{\mathcal{F}_t\}$ is the natural filtration generated by B and $\mathcal{F}_0$.
Then, every $\{\mathcal{F}_t\}$-local martingale M can be expressed as
$\displaystyle M_t=M_0+\sum_{i=1}^d\int_0^t\xi^i\,dB^i$ (1)
for predictable processes $\xi^1,\ldots,\xi^d$ satisfying $\int_0^t(\xi^i_s)^2\,ds<\infty$, almost surely, for each $t\ge0$.
As Brownian motion has quadratic variation $[B^i]_t=t$, the condition that $\int_0^t(\xi^i_s)^2\,ds$ is finite is equivalent to stating that $\xi^i$ is $B^i$-integrable. Taking the quadratic covariation of (1) with $B^i$ and using the identity $[B^i,B^j]_t=\delta_{ij}t$ for $1\le i,j\le d$ gives
$\displaystyle [M,B^i]_t=\int_0^t\xi^i_s\,ds.$
By Lebesgue's differentiation theorem, this shows that $\xi^i$ is given by the derivative
$\displaystyle \xi^i_t=\frac{d[M,B^i]_t}{dt}$
for almost every t.
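For example (an illustration added here), taking $M_t=\exp(B^1_t-t/2)$, Itô's formula gives $dM=M\,dB^1$, so $[M,B^1]_t=\int_0^tM_s\,ds$ and $[M,B^i]=0$ for $i\ne1$. The derivative above then recovers the integrands $\xi^1_t=M_t$ and $\xi^i=0$ for $i\ne1$, in agreement with the representation (1).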
Proving the representation theorem involves showing that, for all integrable random variables Z, there exist predictable and $B^i$-integrable processes $\xi^1,\ldots,\xi^d$ satisfying
$\displaystyle \mathbb{E}[Z\mid\mathcal{F}_t]=\mathbb{E}[Z\mid\mathcal{F}_0]+\sum_{i=1}^d\int_0^t\xi^i\,dB^i$ (2)
(almost surely), for each time $t\ge0$. As long as this can be shown to hold for a suitably large set of random variables, the representation theorem will follow.
I now give an argument showing that (2) is indeed satisfied for certain B-measurable random variables. Equation (2) can be verified directly for random variables of the form
$\displaystyle Z=U\exp\left(i\sum_{j=1}^d\int_0^\infty\alpha^j\,dB^j\right)$ (3)
for deterministic processes $\alpha^1,\ldots,\alpha^d$ satisfying $\int_0^\infty(\alpha^j_s)^2\,ds<\infty$ (for now, i denotes the square root of -1) and $\mathcal{F}_0$-measurable variable U. In particular, for a sequence of times $0\le t_1\le\cdots\le t_n$ and $a_1,\ldots,a_n\in\mathbb{R}^d$, taking $\alpha^j_s=\sum_{k=1}^na^j_k1_{\{s\le t_k\}}$ gives
$\displaystyle Z=U\exp\left(i\sum_{k=1}^na_k\cdot B_{t_k}\right).$
As the integrals $\int_t^\infty\alpha^j\,dB^j$ are normal with zero mean and variance $\int_t^\infty(\alpha^j_s)^2\,ds$, independently of $\mathcal{F}_t$, the expectation can be written out as
$\displaystyle \mathbb{E}[Z\mid\mathcal{F}_t]=U\exp\left(i\sum_{j=1}^d\int_0^t\alpha^j\,dB^j-\frac12\sum_{j=1}^d\int_t^\infty(\alpha^j_s)^2\,ds\right).$
This is continuous and, setting $M_t=\mathbb{E}[Z\mid\mathcal{F}_t]$, Itô's lemma gives
$\displaystyle M_t=M_0+i\sum_{j=1}^d\int_0^tM\alpha^j\,dB^j$
so, taking $\xi^j=iM\alpha^j$ shows that (2) is satisfied for all random variables of the form (3).
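To see a simple instance of this computation (added here for illustration, with a and T chosen arbitrarily), take $d=1$, $U=1$ and $\alpha_s=a1_{\{s\le T\}}$ for a real constant a and time T, so that $Z=e^{iaB_T}$. Then
$\displaystyle M_t=\mathbb{E}\left[e^{iaB_T}\,\middle|\,\mathcal{F}_t\right]=\exp\left(iaB_{t\wedge T}-\tfrac12a^2(T-t\wedge T)\right),$
and Itô's formula gives $dM_t=iaM_t\,dB_t$ on $[0,T]$, so (2) holds with $\xi_t=iaM_t1_{\{t\le T\}}$.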
Alternatively, it can be shown that (2) holds for all random variables of the form $Z=g(B_T)$, for a bounded measurable function $g\colon\mathbb{R}^d\to\mathbb{R}$ and time $T>0$. For times $s\le T$, $B_T-B_s$ is, by definition, joint normal with covariance matrix $(T-s)I$, independently of $\mathcal{F}_s$. This allows us to write out $\mathbb{E}[g(B_T)\mid\mathcal{F}_t]$ for all $t\le T$ as a function of $(t,B_t)$,
$\displaystyle \mathbb{E}[g(B_T)\mid\mathcal{F}_t]=f(t,B_t),\qquad f(t,x)=\int_{\mathbb{R}^d}\left(2\pi(T-t)\right)^{-d/2}e^{-\frac{\Vert y-x\Vert^2}{2(T-t)}}g(y)\,dy.$
From this, it can be seen that f is twice continuously differentiable on $t<T$, so Itô's formula can be applied,
$\displaystyle f(t,B_t)=f(0,B_0)+\sum_{i=1}^d\int_0^tf_i(s,B_s)\,dB^i_s+\int_0^t\left(f_t+\frac12\sum_{i=1}^df_{ii}\right)(s,B_s)\,ds.$ (4)
Here the subscripts t and i represent the partial derivatives of f with respect to time and $x_i$ respectively. The final term on the right-hand side of (4) is a continuous FV process, all the other terms being local martingales. As continuous FV local martingales are constant, this allows us to break (4) up into the following two equations,
$\displaystyle f_t+\frac12\sum_{i=1}^df_{ii}=0,\qquad f(t,B_t)=f(0,B_0)+\sum_{i=1}^d\int_0^tf_i(s,B_s)\,dB^i_s.$
The first of these is the Kolmogorov backward equation, and the second shows that $f(t,B_t)=\mathbb{E}[Z\mid\mathcal{F}_t]$ does indeed satisfy the representation property (2) with integrands $\xi^i_t=f_i(t,B_t)$. It should be clear that this argument is quite general, and applies to any continuous martingale which is Markov with twice continuously differentiable transition densities. In fact, it is not difficult to extend the argument to variables of the form $Z=Ug(B_{t_1},\ldots,B_{t_n})$ for times $t_1\le\cdots\le t_n$, $\mathcal{F}_0$-measurable variable U and bounded measurable function g. However, I will not do this here, as Z of the form (3) is already enough to prove the martingale representation theorem for Brownian motion.
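As a concrete example of this alternative argument (added here, with the constant K chosen just for illustration), take $d=1$ and $g(x)=1_{\{x>K\}}$. Then
$\displaystyle f(t,x)=\mathbb{P}\left(B_T>K\,\middle|\,B_t=x\right)=\Phi\left(\frac{x-K}{\sqrt{T-t}}\right)$
for $t<T$, where $\Phi$ and $\phi$ denote the standard normal distribution function and density. It is easily checked that $f_t+\frac12f_{xx}=0$, and the representation (2) holds with integrand
$\displaystyle \xi_t=f_x(t,B_t)=\frac{1}{\sqrt{T-t}}\phi\left(\frac{B_t-K}{\sqrt{T-t}}\right).$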
From here, it is possible to extend (2) to all bounded measurable random variables, by taking limits and applying the monotone class theorem. Itô's isometry can be used to show that $L^2$-convergence of the random variables implies $L^2$-convergence of the integrands $\xi^i$. However, in the proof given here, I will instead make use of the following more general result, applying to all continuous local martingales. Given a continuous local martingale N, every other local martingale can be expressed as an integral with respect to N plus an 'orthogonal' term. This is a useful result in its own right. In finance, it has applications to models of incomplete markets in which it is not possible to exactly replicate all contingent claims by trading in the underlying stock. Instead, it is only possible to replicate them up to an orthogonal (i.e., unhedgeable) term.
Lemma 3 Let N be a continuous local martingale and M be any other local martingale. Then, there exists an N-integrable process $\xi$ and a local martingale L such that
$\displaystyle M=M_0+\int\xi\,dN+L$
and $[L,N]=0$.
Proof: The lemma reduces to the statement that there is an N-integrable process $\xi$ satisfying $[M,N]=\int\xi\,d[N]$. Then, defining $L=M-M_0-\int\xi\,dN$ gives
$\displaystyle [L,N]=[M,N]-\int\xi\,d[N]=0$
as required.
First, the FV process [M,N] is absolutely continuous with respect to [N]. That is, if $\alpha$ is a nonnegative bounded measurable process satisfying $\int_0^t\alpha\,d[N]=0$, then $\int_0^t\alpha\,d[M,N]=0$ (almost surely). This follows from the Kunita-Watanabe inequality,
$\displaystyle \left|\int_0^t\alpha\,d[M,N]\right|\le\sqrt{\int_0^t\alpha\,d[M]}\sqrt{\int_0^t\alpha\,d[N]}=0.$
This implies that there exists a predictable process $\xi$ with $[M,N]=\int\xi\,d[N]$. It needs to be shown that $\xi$ is N-integrable. That is, $\int_0^t\xi^2\,d[N]$ is almost-surely finite for each time t. This is, again, a consequence of the Kunita-Watanabe inequality. For any constant $K>0$,
$\displaystyle \int_0^t(|\xi|\wedge K)^2\,d[N]\le\int_0^t(|\xi|\wedge K){\rm sgn}(\xi)\,d[M,N]\le\sqrt{\int_0^t(|\xi|\wedge K)^2\,d[N]}\sqrt{[M]_t}.$
Cancelling the $\sqrt{\int_0^t(|\xi|\wedge K)^2\,d[N]}$ term from the right hand side, squaring, and taking the limit $K\to\infty$ gives
$\displaystyle \int_0^t\xi^2\,d[N]\le[M]_t,$
which is finite, as required. ⬜
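As a simple example of Lemma 3 (added here, with $\rho$ an arbitrary constant in $[-1,1]$), let W be a Brownian motion independent of a Brownian motion N, and set $M=\rho N+\sqrt{1-\rho^2}W$. Then $[M,N]_t=\rho t=\int_0^t\rho\,d[N]$, so the decomposition holds with $\xi=\rho$ and $L=\sqrt{1-\rho^2}W$, which indeed satisfies $[L,N]=0$.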
This has the following d-dimensional generalization.
Lemma 4 Let $N^1,\ldots,N^d$ be continuous local martingales with covariations $[N^i,N^j]=0$ for $i\ne j$. Then, every local martingale M can be expressed as
$\displaystyle M=M_0+\sum_{i=1}^d\int\xi^i\,dN^i+L$
for $N^i$-integrable processes $\xi^i$ and a local martingale L satisfying $[L,N^i]=0$ for each i.
Proof: By Lemma 3, there are $N^i$-integrable processes $\xi^i$ such that
$\displaystyle L^i\equiv M-M_0-\int\xi^i\,dN^i$
satisfy $[L^i,N^i]=0$. Setting
$\displaystyle L=M-M_0-\sum_{i=1}^d\int\xi^i\,dN^i$
and using the condition that $[N^i,N^j]=0$ for $i\ne j$ gives
$\displaystyle [L,N^i]=[M,N^i]-\int\xi^i\,d[N^i]=[L^i,N^i]=0.$
⬜
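For instance (an added illustration), if $N^1,N^2$ and W are independent Brownian motions and $M=aN^1+bN^2+cW$ for constants a, b, c, then the decomposition of Lemma 4 holds with $\xi^1=a$, $\xi^2=b$ and $L=cW$, which satisfies $[L,N^1]=[L,N^2]=0$.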
The martingale representation theorem now follows by applying the following result to the decomposition given by Lemma 4, so that the orthogonal term L is constant.
Lemma 5 Let $B=(B^1,\ldots,B^d)$ be a d-dimensional Brownian motion defined on the filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$, and suppose that $\{\mathcal{F}_t\}$ is the natural filtration generated by B and $\mathcal{F}_0$.
Then, any local martingale L satisfying $[L,B^i]=0$ for each i is constant.
Proof: By localization, it is enough to consider the case where L is a cadlag martingale and, replacing L by $L-L_0$ if necessary, we can assume that $L_0=0$. Consider a bounded random variable Z satisfying (2). Then,
$\displaystyle M_t\equiv\mathbb{E}[Z\mid\mathcal{F}_t]=\mathbb{E}[Z\mid\mathcal{F}_0]+\sum_{i=1}^d\int_0^t\xi^i\,dB^i$
is a bounded martingale with covariation
$\displaystyle [L,M]=\sum_{i=1}^d\int\xi^i\,d[L,B^i]=0.$
So, LM is a local martingale. As M is bounded and L is a martingale, LM will be of class (DL), so is a proper martingale. Then,
$\displaystyle \mathbb{E}[L_tZ]=\mathbb{E}[L_tM_t]=\mathbb{E}[L_0M_0]=0.$
For any fixed time t, consider the set S of bounded random variables Z satisfying $\mathbb{E}[L_tZ]=0$. As shown above, this includes all Z of the form (3), which generate the sigma-algebra given by B. Also, dominated convergence implies that S is closed under taking limits of uniformly bounded sequences of variables. So, by the monotone class theorem, S contains all bounded random variables measurable with respect to $\mathcal{F}_0$ and B. In particular, if the filtration is generated by $\mathcal{F}_0$ and B then $Z=1_{\{L_t>0\}}-1_{\{L_t<0\}}$ is in S, giving,
$\displaystyle \mathbb{E}[|L_t|]=\mathbb{E}[L_tZ]=0$
and $L_t=0$ (almost surely), as required. ⬜
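Spelling out the final step: given an $\{\mathcal{F}_t\}$-local martingale M, Lemma 4 applied with $N^i=B^i$ gives $M=M_0+\sum_i\int\xi^i\,dB^i+L$ where $L_0=0$ and $[L,B^i]=0$ for each i. Lemma 5 then shows that L is constant, so $L=0$, which is exactly the representation (1) of Theorem 2.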
Hi, can I have the source of this Martingale Representation Theorem?
Sorry, what are you asking for? You want a source file, as in a pdf/tex file? Or do you mean a reference for the theorem?
Thank you for your reply, I think a reference for the theorem would be helpful.
It’s a standard result which most textbooks should cover. Checking the ones I have to hand, they all mention it. Revuz & Yor, Continuous Martingales and Brownian Motion (Chapter V, Prop 3.2); Protter, Stochastic Integration and Differential Equations (Chapter IV, Theorem 43); Kallenberg, Foundations of Modern Probability (Chapter 18, Theorem 18.10); Rogers & Williams Diffusions, Markov Processes and Martingales (Chapter IV, Theorem 36.1).
That is really helpful! Thank you very much!
I wonder if \xi can be taken such that it has left limits a.s.? Are there any examples where this is not possible?
Hi. No, that’s not possible in general. Any square integrable function can appear as the integrand, and it is only uniquely defined almost everywhere, so it need not have a version with left limits.
Hello George,
as you know, we can’t claim that a Poisson process is continuous a.s.
We know that it is only continuous in probability. But all the books just mention it. I was wondering if the proof that the process cannot be continuous a.s. is difficult.
Thank you.
Alex.
Poisson processes are integer valued and are not constant. That is enough to conclude that they can’t be almost surely continuous.
this holds for all local martingales, not just continuous ones?
Yes. But note that if $\{\mathcal{F}_t\}_{t\ge0}$ is a filtration generated by a Brownian motion then all $\{\mathcal{F}_t\}$-local martingales are continuous anyway. This fact does have further consequences beyond martingale representation. For example, it implies that all $\{\mathcal{F}_t\}$-stopping times are predictable, and all right-continuous adapted processes are predictable.
Does (2) hold for all integrable variables? The proofs just apply “certain” random variables and variables with the representation F(B_T). What are the minimal conditions for this to be true?
Given an integrable Z, we can define a uniformly integrable martingale $M_t=\mathbb{E}[Z\mid\mathcal{F}_t]$, to which we can apply Theorem 2 to show that (2) holds.
Note that this argument is not circular – we never needed to show (2) holds for all integrable random variables to prove Theorem 2, just for the ones of the special exponential form given by (3).
Hi. Thanks for the post. One of the key points in the Martingale Representation Theorem seems to be that the Filtration to which a given martingale is adapted is the natural filtration of the Brownian Motion on a measurable space. In several applications, this particular condition may be hard to verify. For example, the compensated Poisson process is a martingale, but not adapted to the Brownian Filtration. [This somehow limits the general applicability of this theorem]. Is there some constructive procedure to determine whether the given filtration of the martingale, is in fact the natural Brownian Filtration ?
Why is $\int\alpha\,d[M,N]=0$ in Lemma 3?
This was under the condition that $\int\alpha\,d[N]=0$. Integrate the Kunita-Watanabe inequality with respect to this, or use the fact that it implies $\alpha=0$ almost everywhere under the $d[N]$ measure.
Hi! Is there any sufficient condition in Lemma 3 concerning M and N that implies $L=0$? Which means: does there exist a “proper” version of the Radon-Nikodym theorem for martingales? Intuition would suggest that M should be (at least) “absolutely continuous” with respect to N. If we think in a vector measures language this would mean that, for every predictable $\xi$ such that $\int\xi\,dN=0$, we also have $\int\xi\,dM=0$, but maybe this is not sufficient.
Thanks,
Alex
I’m not sure what the sufficient condition would be. Consider the case where M and N are independent standard Brownian motions. Then, taking quadratic variations, if $\int\xi\,dN=0$ we have $\int\xi^2\,ds=0$ and, hence, $\int\xi\,dM=0$, so the suggested condition holds. However, L (which here equals M) is not 0.
Is there a martingale representation theorem for manifold-valued martingales?
Good morning George,
Thank you for your beautiful blog! I would be interested in the following statement you made: “It should be clear that this argument is quite general, and applies to any continuous martingale which is Markov with twice continuously differentiable transition densities”. Do you have a reference for this, some book/paper where this is proven? Thanks
Hi Hurgudurgu,
I have seen similar statements before in various books & papers. Nothing that I can recall precisely right now, but I think that it is quite well known that sufficiently “nice” strong Markov martingales satisfy the representation property. If I remember any references, I will come back here. Probably googling the relevant terms would get something, although probably not the exact conditions stated here.
Thanks
George
Hi George, I have several questions from the post.
1. In the arguments below Theorem 2, when you set M_t = E[Z|F_t], how do you apply Ito’s lemma to get M_t = M_0 + i\sum_j \int Ma^j dB^j? I can’t figure out what is the C^2 function f and the semimartingale X in the Ito’s lemma that you use here.
2. If you take \xi^i = iMa^j, how do we know that \int_0^t (\xi_s^i)^2 ds<\infty? I am concerned with the M_s involved here.
3. In the alternative description, could you refer me to any result that gives the transition density for f(t,x) here?
4. In the paragraph above Lemma 3, you write "From here, it is possible to extend (2) to all bounded measurable random variables, by taking limits and applying the monotone class theorem." How can we extend the result (2) for random variables of the form F(B_T) to all bounded measurable random variables? The limiting argument would require us to approximate any bounded measurable random variable by random variables of the form F(B_T) but I can't see how this is possible.
5. In the first paragraph of proof of Lemma 6, why is LM of class (D), that is {(LM)_\tau, \tau<\infty is a stopping time} is uniformly integrable since M is bounded and L is a martingale?
6. Finally, why does Z of the form (3) generate the sigma algebra given by B?
Thank you for the wonderful notes.
ok, quite a few questions here, I’ll start with the first 3:
1. There’s a few ways you can decompose it. One method is to take X = sum_j int alpha^j dB^j and Y = 1/2 sum_j int_t^infty (alpha^j_s)^2 ds, and Z = f(U,X,Y)=U exp(iX-Y). Or, you can break it down further so that X^j=int alpha^j dB^j and Y^j=1/2 int_t^infty (alpha^j_s)^2 ds and f(U,X^1,…,X^d,Y^1,…,Y^d)=Uexp(i sum_j X^j – sum_j Y^j).
2. a^j is square-integrable by assumption, and Z is bounded conditional on F_0 (you can take U to be bounded if you like, or condition on F_0), so M is bounded.
3. This is the definition of standard Brownian motion. B_T – B_s is normal with mean 0 and covariance matrix (T-s)I, which has the density given.
4. I am assuming that representation of the form (2) applies to all random variables Z of the form Uexp(ia_1.B_{T_1}+…+ia_n.B_{T_n}), for deterministic a_k and (bounded) F_0-measurable U. These do generate the entire sigma-algebra of sets measurable wrt B, so Monotone class theorem can be applied https://almostsuremath.com/2019/10/27/the-functional-monotone-class-theorem/#fmct_thm1
5. You mean lemma 5, and you are correct. I should have said class (DL) i.e., of class (D) over each finite time period. I fixed. Thanks!
Thank you very much George everything in this post is clear now. Thanks for answering all of them.