In this post, I will give a statement and proof of the Bichteler-Dellacherie theorem describing the space of semimartingales. A semimartingale, as defined in these notes, is a cadlag adapted stochastic process X such that the stochastic integral $\int\xi\,dX$ is well-defined for all bounded predictable integrands $\xi$. More precisely, an integral should exist which agrees with the explicit expression for elementary integrands, and satisfies bounded convergence in the following sense. If $\{\xi^n\}$ is a uniformly bounded sequence of predictable processes tending to a limit $\xi$, then $\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX$ in probability as n goes to infinity. If such an integral exists, then it is uniquely defined up to zero probability sets.
An immediate consequence of bounded convergence is that the set of integrals $\int_0^t\xi\,dX$, for a fixed time t and bounded elementary integrands $\vert\xi\vert\le1$, is bounded in probability. That is,

(1) $\displaystyle \left\{\int_0^t\xi\,dX\colon\xi{\rm\ is\ elementary},\ \vert\xi\vert\le1\right\}$

is bounded in probability, for each $t\ge0$. For cadlag adapted processes, it was shown in a previous post that this is both a necessary and sufficient condition to be a semimartingale. Some authors use the property that (1) is bounded in probability as the definition of semimartingales (e.g., Protter, Stochastic Integration and Differential Equations). The existence of the stochastic integral for arbitrary predictable integrands does not follow particularly easily from this definition, at least, not without using results on extensions of vector valued measures. On the other hand, if you are content to restrict to integrands which are left-continuous with right limits, the integral can be constructed very efficiently and, furthermore, such integrands are sufficient for many uses (integration by parts, Ito’s formula, a large class of stochastic differential equations, etc).
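Although the post is purely measure-theoretic, condition (1) can be illustrated numerically. The following sketch (my own toy illustration, not part of the original notes; all function names are mine) approximates elementary integrals against a scaled random walk, a discrete stand-in for a martingale X, using an adapted integrand bounded by 1, and checks that large values of the integral are rare:

```python
import random

def elementary_integral(dx, xi):
    """Discrete analogue of the elementary stochastic integral:
    sum over k of xi_k * (X_{k+1} - X_k), with xi_k known at time k."""
    return sum(x * d for x, d in zip(xi, dx))

def simulate(n_steps=400, n_paths=1000, seed=0):
    """Sample integrals of adapted integrands bounded by 1 against a scaled
    random walk (a discrete stand-in for a martingale X)."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_paths):
        dx = [rng.choice([-1.0, 1.0]) / n_steps ** 0.5 for _ in range(n_steps)]
        running, xi = 0.0, []
        for step in dx:
            # Adapted choice: depends only on the path strictly before this step.
            xi.append(1.0 if running >= 0 else -1.0)
            running += step
        results.append(elementary_integral(dx, xi))
    return results

vals = simulate()
# Boundedness in probability: P(|integral| > K) is uniformly small for large K.
print(sum(1 for v in vals if abs(v) > 10.0) / len(vals))
```

Since each integral here has variance at most 1, Chebyshev's inequality already forces the printed frequency to be tiny, uniformly over all such integrands.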
It was previously shown in these notes that, if X can be decomposed as $X = M + V$ for a local martingale M and FV process V then it is possible to construct the stochastic integral, so X is a semimartingale. The importance of the Bichteler-Dellacherie theorem is that it tells us that a process is a semimartingale if and only if it is the sum of a local martingale and an FV process. In fact this was the historical definition of semimartingales, and is still probably the most common definition.
Throughout, we work with respect to a complete filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},\mathbb{P})$, and all processes are real-valued.
Theorem 1 (Bichteler-Dellacherie) For a cadlag adapted process X, the following are equivalent.
1. X is a semimartingale.
2. For each $t\ge0$, the set given by (1) is bounded in probability.
3. X is the sum of a local martingale and an FV process.
Furthermore, the local martingale term in 3 can be taken to be locally bounded.
So, the three alternative definitions of a semimartingale (as a ‘good integrator’, as a process for which (1) is bounded in probability, and as the sum of a local martingale and an FV process) all agree with each other. The seemingly weakest of these three conditions is that (1) is bounded in probability, while the requirement that X be the sum of a local martingale and an FV process seems like quite a strong condition. So, the main impact of the Bichteler-Dellacherie theorem is in the equivalence of these two conditions.
It is often useful to choose the martingale term in the semimartingale decomposition to be locally square integrable so that, for example, the Ito isometry can be used. This is always possible, and the proof given below naturally generates locally square integrable martingale terms. In fact, we have the stronger property that the martingale term is locally bounded. However, showing that is rather tricky and I will leave the proof until a later post.
As noted above, we have already shown that the first two conditions in the statement of the Bichteler-Dellacherie theorem are equivalent, and that the second and third both imply that X is a semimartingale. So, it only remains to show here that all semimartingales decompose into the sum of a local martingale and FV process. First, however, I will state and prove a useful consequence of the theorem. In many textbooks, stochastic integration with respect to a semimartingale X is constructed directly from the decomposition $X = M + V$ into local martingale and FV terms. This usually requires the integrand $\xi$ to be V-integrable in the Lebesgue-Stieltjes sense, so that $\int_0^t\vert\xi\vert\,\vert dV\vert$ is finite. The integral with respect to M is often constructed using the Ito isometry, and requires $\int_0^t\xi^2\,d[M]$ to be locally integrable. Alternatively, the Burkholder-Davis-Gundy inequality can be used, only requiring the weaker condition that $(\int_0^t\xi^2\,d[M])^{1/2}$ is locally integrable. Then, a process $\xi$ is X-integrable if it is both M-integrable and V-integrable in the way just described. Note that there is a problem with this approach. The definition of X-integrability depends on the decomposition used, and this is not unique. Therefore, a process $\xi$ is said to be X-integrable if there exists any such decomposition with respect to which it is both M-integrable and V-integrable as just described. In these notes, on the other hand, the set of X-integrable processes was defined to be the largest possible class of predictable integrands with respect to which the dominated convergence theorem holds (i.e., they are good dominators). It is not easy to see that these two definitions lead to the same concept of X-integrability but, with the help of the Bichteler-Dellacherie theorem, it can be shown that this is indeed the case.
Theorem 2 If X is a semimartingale and $\xi$ is a predictable process, then the following are equivalent.

1. $\xi$ is X-integrable.
2. $X = M + V$ for some local martingale M and FV process V such that $\int_0^t\vert\xi\vert\,\vert dV\vert<\infty$ for each t (almost surely), and $(\int_0^t\xi^2\,d[M])^{1/2}$ is locally integrable.

Then, $\xi$ is both M and V-integrable and

(2) $\displaystyle \int\xi\,dX=\int\xi\,dM+\int\xi\,dV$

gives a decomposition of $\int\xi\,dX$ into the sum of a local martingale and an FV process.

Furthermore, it is possible to choose the decomposition in 2 so that M and the local martingale term $\int\xi\,dM$ are locally bounded.
Proof: Suppose that condition 2 holds. Then, as previously shown, $\xi$ is V-integrable and the stochastic integral $\int\xi\,dV$ agrees with the Lebesgue-Stieltjes integral. In particular, $\int\xi\,dV$ is an FV process. Also, $\xi$ is M-integrable and $\int\xi\,dM$ is a local martingale. So, $\xi$ is X-integrable and (2) gives a decomposition of $\int\xi\,dX$ into the sum of a local martingale and an FV process.
Conversely, suppose that 1 holds and set $\zeta=1+\vert\xi\vert$, which is X-integrable. Then $Y\equiv\int\zeta\,dX$ is a semimartingale and, by Theorem 1, decomposes as the sum of a locally bounded martingale N and an FV process W. As $X=\int\zeta^{-1}\,dY$ then $M\equiv\int\zeta^{-1}\,dN$ is a local martingale with locally bounded jumps $\Delta M=\zeta^{-1}\Delta N$. So, M is locally bounded. Similarly, $V\equiv\int\zeta^{-1}\,dW$ is an FV process. Associativity of the integral shows that $X=M+V$, $\int\xi\,dM=\int\xi\zeta^{-1}\,dN$ and $\int\xi\,dV=\int\xi\zeta^{-1}\,dW$ and, using the fact that $\xi\zeta^{-1}$ is bounded by 1,

$\displaystyle \int_0^t\vert\xi\vert\,\vert dV\vert=\int_0^t\vert\xi\vert\zeta^{-1}\,\vert dW\vert\le\int_0^t\vert dW\vert<\infty.$

As N was taken to be locally bounded, the jumps $\Delta\int\xi\,dM=\xi\zeta^{-1}\Delta N$ are locally bounded, so $M$ and $\int\xi\,dM$ are locally bounded. ⬜
Existence of the Decomposition
As mentioned above, the first two statements of 1 have already been shown to be equivalent earlier in these notes, and the third condition implies that X is a semimartingale. It remains to show that every semimartingale decomposes as the sum of a local martingale and an FV process, which I will do now. The idea is simple enough, and only really involves the existence of quadratic variations along with the integration by parts formula and some properties of martingales. To simplify things a bit, I will assume that the filtration is right-continuous in this section. However, all of the stated results do also hold in the non-right-continuous case, as I will show later.
The method of constructing the decomposition will be to orthogonally project the semimartingale onto the space of cadlag martingales. We can start by defining a vector space $\mathcal{S}$ as the set of semimartingales whose squared initial value and quadratic variation at infinity are integrable, $\mathbb{E}[X_0^2+[X]_\infty]<\infty$. Then, the quadratic covariation enables us to define a symmetric bilinear form

$\displaystyle \langle X,Y\rangle=\mathbb{E}\left[X_0Y_0+[X,Y]_\infty\right].$

As $2\vert[X,Y]\vert\le[X]+[Y]$, the covariation $[X,Y]_\infty$ is integrable for all $X,Y\in\mathcal{S}$, so this is well-defined. In fact, it defines a degenerate inner product. It only fails to be a true inner product because there exist non-constant semimartingales with zero quadratic variation (e.g., continuous FV processes). This does not matter here though.
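As a toy illustration (mine, not part of the original notes), the degeneracy can be seen by computing quadratic (co)variations along discrete partitions: a continuous FV path has vanishing quadratic variation as the mesh shrinks, while a scaled random walk, mimicking Brownian motion, does not:

```python
import random

def quad_covariation(x, y):
    """Quadratic covariation along a partition: sum of products of increments
    of two paths sampled at the same partition times."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] - y[i]) for i in range(len(x) - 1))

# A continuous FV path, t -> t^2 on [0, 1]: its quadratic variation along
# partitions of mesh 1/n behaves like 4/(3n), vanishing as the mesh shrinks.
for n in (10, 100, 1000):
    path = [(i / n) ** 2 for i in range(n + 1)]
    print(n, quad_covariation(path, path))

# A scaled random walk (steps +/- 1/sqrt(n)): its quadratic variation on [0, 1]
# is exactly n * (1/n) = 1, matching [B]_1 = 1 for Brownian motion.
rng = random.Random(1)
walk = [0.0]
for _ in range(1000):
    walk.append(walk[-1] + rng.choice([-1.0, 1.0]) / 1000 ** 0.5)
print(quad_covariation(walk, walk))
```

So the smooth FV path is "orthogonal to everything" in this bilinear form despite being non-constant, which is exactly the degeneracy mentioned above.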
Next, consider the space of cadlag and $L^2$-bounded martingales M, which I will denote by $\mathcal{H}^2$. By martingale convergence, we can define a map $M\mapsto M_\infty$ taking a martingale M to its limit at infinity $M_\infty=\lim_{t\rightarrow\infty}M_t$. Furthermore, Ito’s isometry shows that $\mathcal{H}^2$ is a subspace of $\mathcal{S}$ and

$\displaystyle \langle M,N\rangle=\mathbb{E}\left[M_0N_0+[M,N]_\infty\right]=\mathbb{E}\left[M_\infty N_\infty\right].$

So, Ito’s isometry gives us an isometry from $\mathcal{H}^2$ to $L^2(\Omega,\mathcal{F}_\infty,\mathbb{P})$. Conversely, given any square-integrable random variable U the martingale $M_t=\mathbb{E}[U\,\vert\,\mathcal{F}_t]$ has a cadlag version (this requires right-continuity of the filtration). If U is $\mathcal{F}_\infty$-measurable then $M_\infty=U$. So, $\mathcal{H}^2$ is congruent to $L^2(\Omega,\mathcal{F}_\infty,\mathbb{P})$. In particular, this means that $\mathcal{H}^2$ is a complete subspace of $\mathcal{S}$ and, hence, there is an orthogonal projection from $\mathcal{S}$ to $\mathcal{H}^2$. Writing $(\mathcal{H}^2)^\perp$ for its orthogonal complement in $\mathcal{S}$, we have arrived at the following.
Lemma 3 Let X be a semimartingale such that $X_0^2+[X]_\infty$ is integrable. Then, there exists a unique decomposition

$\displaystyle X=M+A$

for $M\in\mathcal{H}^2$ and $A\in(\mathcal{H}^2)^\perp$.
We can describe the elements of $(\mathcal{H}^2)^\perp$ in a bit more detail. Let $A\in(\mathcal{H}^2)^\perp$. The constant process $M_t=A_0$ is a martingale, so $0=\langle A,M\rangle=\mathbb{E}[A_0^2]$ and A almost surely starts from zero. Next, if $\xi$ is a bounded elementary predictable process and $M\in\mathcal{H}^2$ then $N\equiv\int\xi\,dM$ is a martingale in $\mathcal{H}^2$. Commuting the integral with the covariation,

$\displaystyle 0=\langle A,N\rangle=\mathbb{E}\left[[A,N]_\infty\right]=\mathbb{E}\left[\int_0^\infty\xi\,d[A,M]\right].$

So, $[A,M]$ is a martingale. Next, $(\mathcal{H}^2)^\perp$ is stable under stopping at any stopping time $\tau$,

$\displaystyle \langle A^\tau,M\rangle=\mathbb{E}\left[A_0M_0+[A,M^\tau]_\infty\right]=\langle A,M^\tau\rangle=0.$
Localizing Lemma 3 gives the following. Recall that $\mathcal{M}_{\rm loc}$ and $\mathcal{M}^2_{\rm loc}$ denote the local martingales and local square integrable martingales respectively.
Lemma 4 Let X be a locally square integrable semimartingale. Then, it decomposes as

$\displaystyle X=M+A$

where $M\in\mathcal{M}^2_{\rm loc}$ and A is such that $[A,N]$ is a local martingale for all $N\in\mathcal{M}_{\rm loc}$.
Proof: Without loss of generality, we can suppose that $X_0=0$. As X is locally square integrable, the same is true of its jumps $\Delta X$. Then, $\Delta[X]=\Delta X^2$ is locally integrable, so $[X]$ is also locally integrable. Therefore, there exists a sequence of stopping times $\tau_n$ increasing to infinity such that $[X]_{\tau_n}$ are integrable.

Apply Lemma 3 to obtain decompositions $X^{\tau_n}=M^n+A^n$ for $M^n\in\mathcal{H}^2$ and $A^n\in(\mathcal{H}^2)^\perp$. For any $m\le n$ we have $X^{\tau_m}=(M^n)^{\tau_m}+(A^n)^{\tau_m}$. By uniqueness of the decomposition, this shows that $(M^n)^{\tau_m}=M^m$, so $(A^n)^{\tau_m}=A^m$ for all $m\le n$. This means that we can define a process A by $A^{\tau_n}=A^n$ for all n and the local square integrable martingale M by $M^{\tau_n}=M^n$. Then, $X=M+A$.
Now, let $N\in\mathcal{M}_{\rm loc}$. So, there is a sequence of stopping times $\sigma_n$ increasing to infinity such that $N^{\sigma_n}$ are uniformly bounded martingales. So, $[A^{\tau_n},N^{\sigma_n}]=[A,N]^{\tau_n\wedge\sigma_n}$ is a martingale and, therefore, $[A,N]$ is a local martingale. ⬜
We now move on to the main part of the proof of Theorem 1 and show that all semimartingales which are (locally) in the orthogonal complement $(\mathcal{H}^2)^\perp$ are actually FV processes. The argument is rather tricky, but the basic idea is to compute the variation of A on an interval $[0,t]$ as the limit of the variation computed along a sequence of partitions. Then, on each partition, express the variation as a stochastic integral plus a martingale term. The stochastic integral term can be bounded using the fact that the set (1) is bounded in probability. This is a simple consequence of bounded convergence (see the proof of Lemma 3 of The Stochastic Integral). The martingale term can be bounded separately by taking expectations and, putting these together, gives a bound (in probability) on the variation of A.

Lemma 5 Let A be a locally integrable semimartingale such that $[A,M]$ is a local martingale for all cadlag bounded martingales M. Then, A is an FV process.
Proof: First, by localization, we can assume that $A^*\equiv\sup_s\vert A_s\vert$ is integrable. Then, we need to show that A almost surely has finite variation on each given time interval $[0,t]$. The variation along a partition

$\displaystyle 0=t_0\le t_1\le\cdots\le t_n=t$

of the interval is defined as

$\displaystyle V=\sum_{i=1}^n\left\vert A_{t_i}-A_{t_{i-1}}\right\vert.$

If we were to take a sequence of partitions with mesh tending to zero and perform this calculation, then V would tend to the variation of A on $[0,t]$. It just needs to be shown that, if we were to do this, then the sequence of random variables obtained for V is bounded in probability and, hence, can only converge to an almost surely finite limit. So, the problem reduces to bounding V in probability independently of the chosen partition. That is, fixing $\epsilon>0$, we need to show that there is a positive real number K, not depending on the times $t_i$, such that $\mathbb{P}(V\ge K)\le\epsilon$.
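The quantity V is just the partition sum of absolute increments. A quick numerical sketch (my own, illustrative only): for a smooth FV path the partition sums converge to the total variation, while for a random-walk path they blow up like the square root of the number of steps, which is why bounding V in probability is the crux of the proof:

```python
import math
import random

def variation_along(path):
    """V = sum over partition intervals of |A_{t_i} - A_{t_{i-1}}|."""
    return sum(abs(path[i + 1] - path[i]) for i in range(len(path) - 1))

# FV example: A_t = sin(2*pi*t) on [0, 1] has total variation 4, and the
# partition sums converge to (here, attain) that value.
for n in (4, 16, 256, 4096):
    path = [math.sin(2 * math.pi * i / n) for i in range(n + 1)]
    print(n, variation_along(path))

# Martingale-like example: a random walk with n steps of size 1/sqrt(n) has
# partition variation exactly n * (1/sqrt(n)) = sqrt(n), diverging with n.
rng = random.Random(2)
for n in (100, 400, 1600):
    walk = [0.0]
    for _ in range(n):
        walk.append(walk[-1] + rng.choice([-1.0, 1.0]) / n ** 0.5)
    print(n, variation_along(walk))
```

The sine partitions here include the extrema, so the sums telescope exactly to 4 on each monotone piece; a generic refining sequence would only converge to 4 in the limit.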
As the filtration is assumed to be right-continuous, it is possible to define cadlag martingales by

$\displaystyle M^i_s=\mathbb{E}\left[1_{\{A_{t_i}\ge A_{t_{i-1}}\}}\,\middle\vert\,\mathcal{F}_s\right].$

Note that this martingale is nonnegative, bounded by 1, and is constant over $[t_i,\infty)$. Integration by parts gives the following expression for the absolute values of the increments of A across the intervals of the partition,

$\displaystyle \left\vert A_{t_i}-A_{t_{i-1}}\right\vert=(2M^i_{t_i}-1)(A_{t_i}-A_{t_{i-1}})=\int_{t_{i-1}}^{t_i}(2M^i_{s-}-1)\,dA_s+2\left(\int_{t_{i-1}}^{t_i}(A_{s-}-A_{t_{i-1}})\,dM^i_s+[M^i,A]_{t_i}-[M^i,A]_{t_{i-1}}\right).$

Summing this over i gives the variation along the partition as

(3) $\displaystyle V=\int_0^t\xi\,dA+N_t$

where $\xi_s=\sum_i(2M^i_{s-}-1)1_{\{t_{i-1}<s\le t_i\}}$ is a predictable process bounded by 1 and N is a local martingale given by,

$\displaystyle N_s=2\sum_i\left(\int_{t_{i-1}\wedge s}^{t_i\wedge s}(A_{u-}-A_{t_{i-1}})\,dM^i_u+[M^i,A]_{t_i\wedge s}-[M^i,A]_{t_{i-1}\wedge s}\right).$

The first term inside the parentheses defining N is a local martingale, as stochastic integration preserves the local martingale property, and the quadratic covariation terms are local martingales by the hypothesis of the lemma. For any time s in $(0,t]$, with i chosen so that $t_{i-1}<s\le t_i$, the size of the jump of N is $\vert\Delta N_s\vert=2\vert\Delta M^i_s\vert\,\vert A_s-A_{t_{i-1}}\vert$. As $M^i$ is bounded by 1 and A is bounded by $A^*$, this gives $\vert\Delta N_s\vert\le4A^*$.
Looking at (3), the first term on the right hand side is a stochastic integral with integrand bounded by 1, so is bounded in probability independently of $\xi$ (so, independently of the partition). To show that A has finite variation, it is enough to find an upper bound (in probability) for the martingale term $N_t$. Choosing a positive real L, let $\tau$ be the first time s at which $N_s\le-L$ or $s\ge t$. By the debut theorem, this is a stopping time. If $j$ is the smallest positive integer with $t_j\ge\tau$ then applying integration by parts and summing over i in the same way as above gives

$\displaystyle \sum_{i<j}\left\vert A_{t_i}-A_{t_{i-1}}\right\vert+(2M^j_\tau-1)(A_\tau-A_{t_{j-1}})=\int_0^\tau\xi\,dA+N_\tau.$

Noting that the first term on the left hand side is nonnegative and the second is bounded by $2A^*$ gives a lower bound for N,

$\displaystyle N_\tau\ge-\left\vert\int_0^\tau\xi\,dA\right\vert-2A^*.$

Each of the terms on the right hand side are bounded in probability, independently of the choice of partition and the stopping time $\tau$. So, choosing L large enough, we can ensure that $\mathbb{P}(N_\tau\le-L)\le\epsilon/2$. Again, L can be chosen independently of the partition. By construction, $N_\tau\le-L$ whenever $\tau<t$, so

$\displaystyle \mathbb{P}(\tau<t)\le\mathbb{P}(N_\tau\le-L)\le\epsilon/2.$

Next, as $\Delta N$ is bounded by $4A^*$, the stopped process $N^\tau$ is bounded below by the integrable random variable $-L-4A^*$. Consequently, $N^\tau$ is a supermartingale so that $\mathbb{E}[N^\tau_t]\le0$ and we can bound the expectation of its positive part,

$\displaystyle \mathbb{E}\left[(N^\tau_t)_+\right]\le\mathbb{E}\left[(N^\tau_t)_-\right]\le L+4\mathbb{E}[A^*].$

Now, Markov’s inequality can be used to bound the probability that N is greater than any positive value K,

$\displaystyle \mathbb{P}(N_t\ge K)\le\mathbb{P}(\tau<t)+\mathbb{P}(N^\tau_t\ge K)\le\frac{\epsilon}{2}+\frac{L+4\mathbb{E}[A^*]}{K}.$

Taking $K=2(L+4\mathbb{E}[A^*])/\epsilon$ gives $\mathbb{P}(N_t\ge K)\le\epsilon$ as required, and this choice of K is independent of the partition. ⬜
Applying this result to Lemma 4, we see that every locally square integrable semimartingale decomposes as the sum of a local martingale and an FV process and, furthermore, that the terms in the decomposition can also be taken to be locally square integrable. All that remains is to extend this to all semimartingales, which is not difficult to do. Recalling that a cadlag adapted process X is locally bounded if and only if its jumps are locally bounded, simply subtract out all the jumps of X exceeding some fixed value. As stated in Lemma 6 below, this means that any semimartingale X decomposes as the sum of a locally bounded (and, in particular, locally square integrable) semimartingale Y and an FV process. So, applying the decomposition from Lemma 4 to Y gives a decomposition into a local martingale and an FV process.
Lemma 6 Let X be a cadlag adapted process. Then, it decomposes as

$\displaystyle X=Y+V$

where V is an FV process and Y satisfies $\vert\Delta Y\vert\le1$. In particular, if X is a semimartingale then so is Y.
Proof: Take $V_t=\sum_{s\le t}1_{\{\vert\Delta X_s\vert>1\}}\Delta X_s$. As X is cadlag it can only have finitely many jumps of size greater than 1 in each bounded interval. So, V is an FV process and is a semimartingale. Therefore, $Y=X-V$ has jumps $\Delta Y=1_{\{\vert\Delta X\vert\le1\}}\Delta X$, so is locally bounded, and is a semimartingale whenever X is. ⬜
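The jump-truncation step in Lemma 6 is elementary to carry out on a discrete path. A small sketch (the function name is mine) splits a sequence of increments into the finitely-many-big-jumps part V and a remainder Y with jumps bounded by 1:

```python
def split_big_jumps(jumps, threshold=1.0):
    """Split a sequence of jump sizes into the big-jump part (the increments
    of V) and a remainder (the jumps of Y), so that the path rebuilt from
    `jumps` is the sum of the two: X = Y + V."""
    v = [dx if abs(dx) > threshold else 0.0 for dx in jumps]
    y = [dx - dv for dx, dv in zip(jumps, v)]
    return y, v

jumps = [0.3, -2.5, 0.9, 4.0, -0.1]
y, v = split_big_jumps(jumps)
print(y)  # [0.3, 0.0, 0.9, 0.0, -0.1]
print(v)  # [0.0, -2.5, 0.0, 4.0, 0.0]
assert all(abs(dy) <= 1.0 for dy in y)
```

As in the lemma, on any bounded interval a cadlag path contributes only finitely many entries to the big-jump part, so V has finite variation automatically.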
This completes the proof of the Bichteler-Dellacherie theorem given here, at least for right-continuous filtrations. There is, still, one final point to add. Theorem 1 contains the additional claim that the martingale term in the decomposition can be taken to be locally bounded. The construction above already gives us a locally square integrable martingale term, which is strong enough for most applications. However, as mentioned above, the proof of the stronger locally bounded property will be left until a later post. In fact, it will follow from properties of predictable processes that the martingale term constructed above is already locally bounded. Actually, as well as being an FV process, any A satisfying the conditions of Lemma 5 is also predictable which, as will be shown, implies that it is locally bounded. So, Lemma 4 decomposes a locally bounded semimartingale into the sum of a locally bounded FV process and a local martingale which, being the difference of two locally bounded processes, must itself be locally bounded. Applying this to the semimartingale Y from Lemma 6 automatically results in a locally bounded martingale term.
Non-Right-Continuous Filtrations
The Bichteler-Dellacherie theorem has been proven above in the case where the filtration is right-continuous, so that $\mathcal{F}_t$ is equal to $\mathcal{F}_{t+}\equiv\bigcap_{s>t}\mathcal{F}_s$ for all times t. Being one of the usual conditions, right-continuity is often assumed to hold in stochastic process theory. However, the conclusion of the theorem does still hold without imposing this condition, and I will show why this is the case here. While it is possible to drop this condition from the start, assuming right-continuity does simplify the argument, which is why it was assumed above.
Passing between the original filtration $\{\mathcal{F}_t\}$ and the right-continuous one $\{\mathcal{F}_{t+}\}$ can be done without too much trouble. For example, consider the space of $\mathcal{F}_{t+}$-predictable processes. By definition, this is generated by the left-continuous and $\mathcal{F}_{t+}$-adapted processes but, using left-continuity, any such process $\xi$ is automatically $\mathcal{F}_t$-adapted at all positive times t. Therefore, $1_{\{t>0\}}\xi$ is $\mathcal{F}_t$-predictable. This means that if X is a semimartingale with respect to the original filtration, then the stochastic integral $\int\xi\,dX$ is well-defined for all bounded $\mathcal{F}_{t+}$-predictable processes $\xi$. By bounded convergence, it is easily checked that this also agrees with the explicit expression for all $\mathcal{F}_{t+}$-elementary integrands. So, by the definition used in these notes, X is also an $\mathcal{F}_{t+}$-semimartingale. Conversely, if X is an $\mathcal{F}_{t+}$-semimartingale and is adapted under the original filtration then, as the stochastic integral agrees by definition with the expression for elementary integrands, it is clearly also a semimartingale under the original filtration.
Lemma 7 An adapted process X is an $\mathcal{F}_t$-semimartingale if and only if it is an $\mathcal{F}_{t+}$-semimartingale.
Now, suppose that X is a semimartingale under the original filtration such that $X_0^2+[X]_\infty$ is integrable. I’ll write $\mathcal{H}^2_+$ for the space of $L^2$-bounded and cadlag $\mathcal{F}_{t+}$-martingales. Then, applying Lemma 3 under $\{\mathcal{F}_{t+}\}$ gives a decomposition $X=M+A$ where $M\in\mathcal{H}^2_+$ and A is an $\mathcal{F}_{t+}$-semimartingale satisfying $A_0=0$ and such that $[A,N]$ is a martingale for all $N\in\mathcal{H}^2_+$. In fact, it can be shown that A is $\mathcal{F}_t$-adapted by applying Lemma 8 below under the filtration $\{\mathcal{F}_{t+}\}$. So, $M=X-A$ is also $\mathcal{F}_t$-adapted, hence a martingale under the original filtration, and Lemma 3 generalises to the non-right-continuous case. Alternatively, Doob’s inequality can be used to show that any limit of cadlag martingales in the $L^2$ sense also has a cadlag modification. Then, $\mathcal{H}^2$ is seen to be complete, and the proof of Lemma 3 follows in the same way as above, without making any reference to right-continuity of the filtration.
Lemma 8 Suppose that A is a semimartingale such that $A_0^2+[A]_\infty$ is integrable and $[A,M]$ is a martingale for all uniformly bounded cadlag martingales M. Then, $A_t$ is $\mathcal{F}_{t-}$-measurable for all positive times t.
Proof: Suppose that U is a bounded $\mathcal{F}_t$-measurable random variable. Then, $M_s=(U-\mathbb{E}[U\,\vert\,\mathcal{F}_{t-}])1_{\{s\ge t\}}$ is a uniformly bounded cadlag martingale. So, $\mathbb{E}[A_0M_0+[A,M]_\infty]=\mathbb{E}[\Delta A_t(U-\mathbb{E}[U\,\vert\,\mathcal{F}_{t-}])]$ is zero and, rearranging this gives $\mathbb{E}[\Delta A_tU]=\mathbb{E}[\mathbb{E}[\Delta A_t\,\vert\,\mathcal{F}_{t-}]U]$. So, $\Delta A_t=\mathbb{E}[\Delta A_t\,\vert\,\mathcal{F}_{t-}]$ and, hence, $\Delta A_t$ and $A_t=A_{t-}+\Delta A_t$ are $\mathcal{F}_{t-}$-measurable. ⬜
The only remaining place where right-continuity was required above was in the proof of Lemma 5. In fact, if A satisfies the conditions of that lemma, then it also satisfies the same conditions under the right-continuous filtration $\{\mathcal{F}_{t+}\}$. Then, the conclusion that A has finite variation over all bounded time intervals will hold (and this does not refer to any filtration). To pass to the right-continuous filtration, the following simple lemma allows us to convert $\mathcal{F}_{t+}$-martingales to $\mathcal{F}_t$-martingales.
Lemma 9 Suppose that M is a cadlag and square-integrable $\mathcal{F}_{t+}$-martingale. Then, there exists a countable subset $S\subseteq(0,\infty)$ such that $\mathbb{E}[\Delta M_t^2]=0$ for all $t\not\in S$. Furthermore,

$\displaystyle M^\prime_t=M_t-\sum_{s\in S,\,s\le t}\Delta M_s$

is an $\mathcal{F}_t$-martingale.
Proof: As it is cadlag, M can only have finitely many times at which $\vert\Delta M_t\vert\ge\epsilon$ on a bounded time interval [0,T] for each $\epsilon>0$. So, $\sum_{t\le T}\Delta M_t^2$ is finite and, as $\mathbb{E}[\sum_{t\le T}\Delta M_t^2]\le\mathbb{E}[[M]_T]<\infty$, there are only finitely many such times at which $\mathbb{E}[\Delta M_t^2]\ge\epsilon$. Then, letting $\epsilon$ decrease to zero and T increase to infinity, there is only a countable set of times S at which $\mathbb{E}[\Delta M_t^2]>0$.
Now, suppose that S is as above, and set $M^\prime_t=M_t-\sum_{s\in S,\,s\le t}\Delta M_s$. This is a square-integrable martingale with respect to $\{\mathcal{F}_{t+}\}$. To show that it is also an $\mathcal{F}_t$-martingale, it is enough to show that $M^\prime_t=M^\prime_{t-}$ (almost surely) for each fixed time t, so that it is adapted. However, $\Delta M^\prime_t=1_{\{t\not\in S\}}\Delta M_t$. If $t\in S$ then this is zero and, if $t\not\in S$ then $\mathbb{E}[\Delta M_t^2]=0$ so, again, $\Delta M^\prime_t=0$ almost surely. ⬜
Now, suppose that A satisfies the conditions of Lemma 5 and M is a cadlag bounded $\mathcal{F}_{t+}$-martingale. By localization, it can be assumed that $\sup_t\vert A_t\vert$ is integrable. Then, let $s_1,s_2,\ldots$ be a sequence of distinct times including the set S of all t for which $\mathbb{E}[\Delta M_t^2]>0$. Lemma 9 above shows that $M^\prime=M-\sum_k1_{[s_k,\infty)}\Delta M_{s_k}$ is a martingale under the original filtration. The quadratic covariation $[A,M]$ can be decomposed as

$\displaystyle [A,M]_t=[A,M^\prime]_t+\sum_k1_{\{s_k\le t\}}\Delta A_{s_k}\Delta M_{s_k}.$

The term $[A,M^\prime]$ is a local martingale under the original filtration, by assumption on the properties of A, so it is also an $\mathcal{F}_{t+}$-local martingale. As $\Delta A_t$ is $\mathcal{F}_{t-}$-measurable (Lemma 8), the terms inside the summation are also $\mathcal{F}_{t+}$-martingales. So, by dominated convergence of the sum, we see that $[A,M]$ is an $\mathcal{F}_{t+}$-local martingale, and the conditions of Lemma 5 also hold with respect to this right-continuous filtration.
Notes
I will give a few further comments regarding the proof of the Bichteler-Dellacherie theorem given here. It is instructive, I think, to compare this method with the more common approach found in many textbooks. There, the approach is to reduce the problem to the case of quasimartingales. These are cadlag adapted and integrable processes X such that the set

$\displaystyle \left\{\sum_{i=1}^n\mathbb{E}\left[\left\vert\mathbb{E}\left[X_{t_i}-X_{t_{i-1}}\,\vert\,\mathcal{F}_{t_{i-1}}\right]\right\vert\right]\colon 0=t_0\le t_1\le\cdots\le t_n\right\}$

is bounded. In that case, a generalization of the Doob-Meyer decomposition to quasimartingales says that X decomposes as $X=M+A$ for a martingale M and integrable variation process A. The method of reducing to quasimartingales is to consider the set $S$ defined in (1) above, for any fixed time t. This is bounded in probability and, by the Hahn-Banach theorem, it is possible to show that for any convex set $S$ of random variables which is bounded in probability, there exists a uniformly bounded and strictly positive random variable Z such that $\{\mathbb{E}[ZU]\colon U\in S\}$ is bounded. Normalising Z so that $\mathbb{E}[Z]=1$, this can be used to show that X is a quasimartingale under the equivalent measure $d\mathbb{Q}=Z\,d\mathbb{P}$. Equivalently, defining the martingale $Z_t=\mathbb{E}[Z\,\vert\,\mathcal{F}_t]$, the process $ZX$ is a quasimartingale, so decomposes as the sum of a martingale M and an FV process by Rao’s decomposition. Then, Ito’s formula can be used to express X as the sum of a local martingale and an FV process.
On the other hand, the approach used here is via Lemma 3 above, which decomposes X uniquely into a martingale term plus an orthogonal component. This is quite direct, and the main work involved is in showing that the orthogonal component does have finite variation (Lemma 5). One advantage of this approach is that it avoids the use of the Hahn-Banach theorem and the associated arbitrary change of measure (or auxiliary martingale Z). It also results in a decomposition which is essentially unique. In fact, the decomposition in Lemma 4 for locally square integrable semimartingales is unique, and the general case gives a unique decomposition after all large jumps have been subtracted out. This uniqueness is expressed in the canonical decomposition of special semimartingales, which I will cover in a later post.
It is also interesting to ask how much stochastic calculus is really required in the proof above. The existence of cadlag versions of martingales was required, as was the existence of quadratic variations, the integration by parts formula, and the Ito isometry. No other big results were needed. I did also make use of the equivalence between the first two statements of Theorem 1 right from the start of the post though. That is, if the set defined by (1) is bounded in probability, then it is possible to define the stochastic integral for bounded predictable $\xi$. This is rather tricky to prove, and was assumed for convenience, since it has already been proved in an earlier post. However, this is not strictly necessary. Just assuming that the second statement of the theorem holds, it is easy to define the integral for integrands which are adapted and left-continuous with right limits, satisfying a rather weak uniform convergence property. That is, if $\xi^n\rightarrow\xi$ uniformly then $\int_0^t\xi^n\,dX\rightarrow\int_0^t\xi\,dX$ in probability. This is enough to prove results such as integration by parts and the Ito isometry for such integrands, and is enough to be able to apply the proof given here to decompose X as the sum of a local martingale and an FV process. See Protter, Stochastic Integration and Differential Equations, for a development of stochastic integration in this way. However, even in Protter, the construction of the stochastic integral for arbitrary predictable integrands does depend on the standard proof of the Bichteler-Dellacherie theorem as just outlined.
Finally, consider the proof that condition 3 of the Bichteler-Dellacherie theorem as stated above implies that X is a semimartingale, so that the stochastic integral exists. Here, I referred to the proof given earlier in these notes which applies to all local martingales. However, the proof of condition 3 above gives a locally square integrable martingale term in the decomposition of X. In this case, the standard construction via the Ito isometry is quite efficient. By localising, and ignoring the finite variation term (which is clearly a semimartingale), it can be assumed that X is a square-integrable martingale. Applying the Ito isometry for elementary integrands gives

$\displaystyle \mathbb{E}\left[\left(\int_0^t\xi\,dX\right)^2\right]=\mathbb{E}\left[\int_0^t\xi^2\,d[X]\right]$

for elementary $\xi$. However, the right hand side is well-defined for all bounded predictable $\xi$ (by Lebesgue-Stieltjes integration). For bounded predictable integrands, the stochastic integral can be defined as the unique linear extension such that this identity still holds.
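This elementary-integrand Ito isometry is easy to check by simulation in discrete time, where a scaled random walk plays the role of the square-integrable martingale X and the integrand is adapted (a toy sketch of my own, not the construction from the notes):

```python
import random

def ito_isometry_check(n_steps=200, n_paths=5000, seed=3):
    """Compare E[(integral of xi dX)^2] with E[integral of xi^2 d[X]] for a
    scaled random walk X and an adapted integrand xi taking values +/- 1."""
    rng = random.Random(seed)
    lhs = rhs = 0.0
    for _ in range(n_paths):
        integral = qv_integral = running = 0.0
        for _ in range(n_steps):
            xi = 1.0 if running >= 0 else -1.0  # adapted: uses the past only
            dx = rng.choice([-1.0, 1.0]) / n_steps ** 0.5
            integral += xi * dx
            qv_integral += xi * xi * dx * dx    # d[X] = dx^2 on this path
            running += dx
        lhs += integral ** 2
        rhs += qv_integral
    return lhs / n_paths, rhs / n_paths

lhs, rhs = ito_isometry_check()
print(lhs, rhs)  # both sides are close to 1
```

In this discrete setting the identity holds exactly in expectation, so the left-hand Monte-Carlo average fluctuates around the right-hand side, which here equals 1 on every path.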
Hi,
Great post, but I am still under the process of digesting it (which can take quite a long time).
I have some remarks and questions but I will try not to overwhelm you with them.
So first things first.
In the second paragraph of this post, you mention that the set of integrals $\int_0^t\xi\,dX$, for a fixed time t and integrands $\xi$ bounded by 1, is bounded in probability.
I am not particularly acquainted with the notion of “boundedness in probability”.
If I have it right this means that for a family $(X_i)_{i\in I}$ of real r.v. indexed by a set I, this family is bounded in probability if and only if :

$\displaystyle \forall\epsilon>0,\ \exists K>0,\ \sup_{i\in I}\ {\rm s.t.}\ \mathbb{P}(\vert X_i\vert>K)<\epsilon.$

I think that this is only the notion of tightness applied to random variables instead of probability measures, is that right ?
Second, still in this paragraph you start with the following statement :
“An immediate consequence of bounded convergence is that the set of integrals
for a fixed time t and bounded integrands is bounded in probability”
But the link that you point to provides a proof only for elementary predictable bounded integrands and your claim is for the more general case of bounded (by 1) predictable integrands.
Is that a typo or is it true ?
Anyway the next statement is true and proved in the link.
Regards
Hi.
Your interpretation of boundedness in probability is correct (except that the $\sup_{i\in I}$ should be after the ‘s.t.’, but I assume that’s just a typo) and, yes, this is equivalent to tightness of the measures. I was thinking about writing a post on convergence and boundedness in probability (which I might still do when I get round to it). Actually, if you understand the notion of convergence in probability, then you also get boundedness in probability for free. The space of real-valued random variables, $L^0$, under convergence in probability is a topological vector space (TVS). That is, convergence in probability is a vector topology, so addition and multiplication by scalars are continuous. Then, there is the notion of bounded sets in any TVS. A subset S of a TVS is bounded if it is absorbed by any neighbourhood N of the origin. This means that $S\subseteq\lambda N$ for some real $\lambda>0$. In the case of $L^0$ you can check that this coincides with the definition you just correctly gave.
You also get some other simple facts which are true in general TVS’s. A set S is bounded if and only if for any sequence $X_n\in S$ and reals $\lambda_n\rightarrow0$ then $\lambda_nX_n\rightarrow0$. Also a linear map $T\colon V\rightarrow W$ between a normed space V and a TVS W is continuous if and only if it is bounded (i.e., the image of T on the unit ball is bounded). In this case you can let V be the set of elementary predictable processes under uniform convergence, W be $L^0$ and T be the stochastic integral up to a fixed time. So, in the statement of the Bichteler-Dellacherie theorem, you can equivalently say that the integral is continuous (under the respective topologies). I tend to prefer the statement in terms of boundedness in probability rather than continuity, but that’s just a matter of taste.
Also, for your second statement. Yes, it is true that you get boundedness in probability for the more general case of bounded (by 1) predictable integrands. If you look at the proof in the post I linked, it works just as well if you replace “elementary predictable” by “predictable”. It was just in the statement that I only mentioned elementary integrands, since that was the main point at the time. I think I’ll go back and edit that post with the more general statement.
Thanks for your questions, I’m sure there’ll be more when you get further through the post.
Btw, I edited the latex in your post (Using LaTeX in comments).
Regards
I made a minor update to this post to reflect your questions.
Hi,
I have a question which makes a link between theorem 2 of this post and your post about “Failure of Pathwise Integration for FV Processes”.
Let’s take $X$ and $\xi$ as any of the examples defined in the “Failure of Pathwise Integration for FV Processes” post so that $\int_0^t\vert\xi\vert\,\vert dX\vert=\infty$, but $\xi$ is nonetheless $X$-integrable.
As $X$ is a FV process, it is a semimartingale, and as $\xi$ is $X$-integrable then from theorem 2, this implies that there exists :
– a local(ly bounded) martingale $M$ s.t. $(\int_0^t\xi^2\,d[M])^{1/2}$ is locally integrable
– a FV process $V$ s.t. $\int_0^t\vert\xi\vert\,\vert dV\vert<\infty$ almost surely.
Moreover we have :

$\displaystyle \int\xi\,dX=\int\xi\,dM+\int\xi\,dV$

with the first term in the righthand equality being a locally bounded martingale and the second term a FV process.
So here is the question, do you think it is possible to explicitly express $M$ and $V$ in those counterexamples ?
Second, is there a link with the Lévy-Itô decomposition of Lévy processes (if they are of infinite activity type) ?
Best regards
Yes you can, easily! The decomposition is a bit simple to be interesting though. In the examples in the post you refer to, X and the integral with respect to X are both local martingales with bounded jumps, so they are locally bounded martingales. You can simply take M = X and A = 0.
For your second point, regarding the Lévy-Itô decomposition. There is a relation, but my more recent post on special semimartingales is probably more closely related. For any semimartingale X you can write $X_t=Y_t+\sum_{s\le t}1_{\{\vert\Delta X_s\vert>1\}}\Delta X_s$.
Then, Y is a semimartingale with jumps bounded by 1. So it is locally bounded (so, locally integrable) and we can apply the unique decomposition Y = M + A where M is a local martingale and A is a predictable FV process starting from 0. Also, A will have jumps bounded by 1 and both M and A are locally bounded. Writing $X=M+(A+\sum_{s\le t}1_{\{\vert\Delta X_s\vert>1\}}\Delta X_s)$ splits X up into a locally bounded martingale and an FV process. You can also further split M into the sum of a continuous local martingale and a ‘purely discontinuous’ local martingale.
These kinds of manipulations and decompositions are a bit simpler with even more of the general theory, which I will expand on a bit in upcoming posts.
Ok that’s clear, thanks a lot for this answer.
By the way I think there is a tiny typo in the paragraph preceding theorem 2.
You wrote :
“Therefore, a process $\xi$ is said to be X-integrable if the exists any such decomposition with respect to which it is both M-integrable and V-integrable as just described.”
Did you mean ?
“Therefore, a process $\xi$ is said to be X-integrable if for any such decomposition, it is both M-integrable and V-integrable as just described.”
Best regards
The point is, there does not generally exist a decomposition which works for all integrands. The decomposition will depend on $\xi$ to some extent. So, no, I don’t think that the alternative you suggest is really correct. There is a very small typo though, “the” should be “there”. Other than that I think it is correct as it is. Maybe it could be clearer, I’ll have to try reading through the post again.
Ok now that you say it, it seems more reasonable claimed this way, thanks again.
Best regards
Hi George,
This morning there is another “simple” proof of the Bichteler-Dellacherie Theorem on arxiv here :
Click to access 1201.1996v1.pdf
Best Regards
Thanks for pointing that out. The approach seems to be to show that you can reduce to the quasimartingale case by a kind of localization — as opposed to the ‘standard’ approach, which reduces to the quasimartingale case by a change of measure.