Continuous Processes with Independent Increments

A stochastic process X is said to have independent increments if {X_t-X_s} is independent of {\{X_u\}_{u\le s}} for all {s\le t}. For example, standard Brownian motion is a continuous process with independent increments. Brownian motion also has stationary increments, meaning that the distribution of {X_{t+s}-X_t} does not depend on t. In fact, as I will show in this post, up to a scaling factor and linear drift term, Brownian motion is the only such process. That is, any continuous real-valued process X with stationary independent increments can be written as

\displaystyle  X_t = X_0 + b t + \sigma B_t (1)

for a Brownian motion B and constants {b,\sigma}. This is not so surprising in light of the central limit theorem. The increment of a process across an interval [s,t] can be viewed as the sum of its increments over a large number of small time intervals partitioning [s,t]. If these terms are independent with relatively small variance, then the central limit theorem does suggest that their sum should be normally distributed. Together with the previous posts on Lévy’s characterization and stochastic time changes, this provides yet more justification for the ubiquitous position of Brownian motion in the theory of continuous-time processes. Consider, for example, stochastic differential equations such as the Langevin equation. The natural requirement for the stochastic driving term in such equations is that it be continuous with stationary independent increments and, therefore, it can be written in terms of Brownian motion.
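As a quick numerical illustration of this central limit heuristic (a simulation sketch, not part of any proof; the uniform increment distribution is an arbitrary choice), summing many small independent non-normal increments produces an approximately normal increment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Partition [0, 1] into n subintervals and sum independent increments over
# them. The individual increments are uniform (deliberately non-normal),
# each with mean 0 and variance 1/n, so the sum has mean 0 and variance 1.
n = 1_000          # number of subintervals
samples = 100_000  # number of simulated paths
half_width = np.sqrt(3.0 / n)  # uniform on [-h, h] has variance h**2 / 3

X1 = np.zeros(samples)  # X_1 - X_0 for each path
for _ in range(n):
    X1 += rng.uniform(-half_width, half_width, samples)

# The central limit theorem suggests X1 should be approximately N(0, 1):
print(np.mean(X1), np.var(X1))  # near 0 and 1
print(np.mean(X1**4))           # near 3, the N(0, 1) fourth moment
```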

The definition of standard Brownian motion extends naturally to multidimensional processes and general covariance matrices. A standard d-dimensional Brownian motion {B=(B^1,\ldots,B^d)} is a continuous process with stationary independent increments such that {B_t} has the {N(0,tI)} distribution for all {t\ge 0}. That is, {B_t} is joint normal with zero mean and covariance matrix tI. From this definition, {B_t-B_s} has the {N(0,(t-s)I)} distribution independently of {\{B_u\colon u\le s\}} for all {s\le t}. This definition can be further generalized. Given any {b\in{\mathbb R}^d} and positive semidefinite {\Sigma\in{\mathbb R}^{d^2}}, we can consider a d-dimensional process X with continuous paths and stationary independent increments such that {X_t} has the {N(tb,t\Sigma)} distribution for all {t\ge 0}. Here, {b} is the drift of the process and {\Sigma} is the ‘instantaneous covariance matrix’. Such processes are sometimes referred to as {(b,\Sigma)}-Brownian motions, and all continuous d-dimensional processes starting from zero and with stationary independent increments are of this form.

Theorem 1 Let X be a continuous {{\mathbb R}^d}-valued process with stationary independent increments.

Then, there exist unique {b\in{\mathbb R}^d} and {\Sigma\in{\mathbb R}^{d^2}} such that {X_t-X_0} is a {(b,\Sigma)}-Brownian motion.

This result is a special case of Theorem 2 below. In particular, consider the case of a continuous real-valued process X with stationary independent increments. Then, by this result, there are constants {b,\sigma\in{\mathbb R}} such that {X_t-X_s} is normal with mean {(t-s)b} and variance {(t-s)\sigma^2} for {t\ge s}. As long as X is not a deterministic process, so that {\sigma} is nonzero, {B_t=(X_t-X_0-tb)/\sigma} will be a standard Brownian motion and (1) is satisfied.

It is also possible to define Gaussian processes with independent but non-stationary increments. Consider continuous functions {b\colon{\mathbb R}_+\rightarrow{\mathbb R}^d} and {\Sigma\colon{\mathbb R}_+\rightarrow{\mathbb R}^{d^2}} with {b_0=0}, {\Sigma_0=0}, and such that {\Sigma_t} is increasing in the sense that {\Sigma_t-\Sigma_s} is positive semidefinite for all {s\le t}. Then, there will exist processes X with the independent increments property and such that {X_t-X_s} has the {N(b_t-b_s,\Sigma_t-\Sigma_s)} distribution for all {s\le t}. This exhausts the space of continuous d-dimensional processes with independent increments.

Theorem 2 Let X be a continuous {{\mathbb R}^d}-valued process with the independent increments property.

Then, there exist (unique, continuous) functions {b\colon{\mathbb R}_+\rightarrow{\mathbb R}^d} and {\Sigma\colon{\mathbb R}_+\rightarrow{\mathbb R}^{d^2}} with {b_0=0}, {\Sigma_0=0} such that {X_t-X_s} has the {N(b_t-b_s,\Sigma_t-\Sigma_s)} distribution for all {t>s}.

Note, in particular, that if the increments of X are also stationary, then {b_{t+s}-b_t} and {\Sigma_{t+s}-\Sigma_t} will be independent of t for each fixed {s\ge 0}. It follows that {b_t=t\tilde b} and {\Sigma_t=t\tilde \Sigma} for some {\tilde b\in{\mathbb R}^d} and {\tilde \Sigma\in{\mathbb R}^{d^2}}. Theorem 1 is then a direct consequence of this result.

Before moving on to the proof of Theorem 2, I should point out that there do indeed exist well-defined processes with the required distributions. First, for the stationary increments case, consider {b\in{\mathbb R}^d} and positive semidefinite {\Sigma\in{\mathbb R}^{d^2}}. Letting {\Sigma=QQ^{\rm T}} be the Cholesky decomposition and B be a d-dimensional Brownian motion,

\displaystyle  X_t = tb + QB_t

is easily seen to have independent increments {X_t-X_s} with the {N((t-s)b,(t-s)\Sigma)} distribution. More generally, consider continuous functions {b\colon{\mathbb R}_+\rightarrow{\mathbb R}^d} and {\Sigma\colon{\mathbb R}_+\rightarrow{\mathbb R}^{d^2}} with {b_0=0}, {\Sigma_0=0} and such that {\Sigma_t-\Sigma_s} is positive semidefinite for {t\ge s}. If {\Sigma} is absolutely continuous, so that {\Sigma_t=\int_0^t\Sigma^\prime_s\,ds} for some measurable {\Sigma^\prime\colon{\mathbb R}_+\rightarrow{\mathbb R}^{d^2}}, then X can similarly be expressed in terms of a d-dimensional Brownian motion B. As {\Sigma} is increasing, {\Sigma^\prime_t} will be positive semidefinite for almost all t. Letting {\Sigma^\prime_t=Q_tQ^{\rm T}_t} be its Cholesky decomposition,

\displaystyle  X_t = b_t+\int_0^t Q\,dB (2)

satisfies the required properties. The {Q\,dB} term here is to be interpreted as matrix multiplication, {(Q\,dB)^i=\sum_jQ^{ij}\,dB^j}. First,

\displaystyle  \sum_j\int_0^t(Q^{ij}_s)^2\,ds=\int_0^t(\Sigma^\prime_s)^{ii}\,ds=\Sigma^{ii}_t

is finite, so {Q^{ij}} is indeed {B^j}-integrable. The integral {\int Q\,dB} is also normally distributed with independent increments. If Q is piecewise constant then this follows from the fact that linear combinations of joint normal random variables are normal, and the case for general deterministic integrands follows by taking limits. The covariance matrix of {X_t-X_s} can be computed using the Ito isometry,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[(X^i_t-X^i_s)(X^j_t-X^j_s)\right] &\displaystyle={\mathbb E}\left[[X^i,X^j]_t-[X^i,X^j]_s\right]\smallskip\\ &\displaystyle=\sum_{kl}\int_s^tQ^{ik}Q^{jl}\,d[B^k,B^l]\smallskip\\ &\displaystyle=\sum_k\int_s^t Q^{ik}_uQ^{jk}_u\,du=\Sigma^{ij}_t-\Sigma^{ij}_s. \end{array}

This identity made use of the covariations {[B^i,B^j]_s=\delta_{ij}s}. So, the process given by (2) does indeed have independent increments {X_t-X_s} with the {N(b_t-b_s,\Sigma_t-\Sigma_s)} distribution.
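The construction (2) can be checked by simulation. In this sketch, the drift b and the square root Q of the instantaneous covariance are hypothetical example choices, and the stochastic integral is approximated by a left-endpoint Euler sum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example choices of drift b_t and a square root Q_t of the
# instantaneous covariance Sigma'_t (any Q with Q Q^T = Sigma' would do).
def b(t):
    return np.array([t, -0.5 * t])

def Q(t):
    return np.array([[1.0, 0.0], [0.5, np.sqrt(1.0 + t)]])

def Sigma_prime(t):
    q = Q(t)
    return q @ q.T

# Euler approximation of X_T = b_T + \int_0^T Q dB over many paths.
T, n, paths = 1.0, 200, 20_000
dt = T / n
X = np.zeros((paths, 2))
for k in range(n):
    dB = rng.normal(0.0, np.sqrt(dt), size=(paths, 2))
    X += dB @ Q(k * dt).T  # (Q dB)^i = sum_j Q^{ij} dB^j
X += b(T)

# Sigma_T = \int_0^T Sigma'_s ds, by the matching left-endpoint quadrature.
Sigma_T = sum(Sigma_prime(k * dt) * dt for k in range(n))
print(np.mean(X, axis=0))  # near b(T)
print(np.cov(X.T))         # near Sigma_T
```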

Finally, in the general case, a deterministic time change can be applied to force {\Sigma_t} to be absolutely continuous. Define {\gamma\colon{\mathbb R}_+\rightarrow{\mathbb R}_+} by {\gamma(t)={\rm Tr}\Sigma_t+t}. This is continuous and strictly increasing, so has an inverse {\gamma^{-1}}. By positive semidefiniteness

\displaystyle  \left\vert\Sigma^{ij}_{\gamma^{-1}(t)}-\Sigma^{ij}_{\gamma^{-1}(s)}\right\vert \le\sum_k\left(\Sigma^{kk}_{\gamma^{-1}(t)}-\Sigma^{kk}_{\gamma^{-1}(s)}\right)\le t-s

for {t\ge s}. So, {\Sigma_{\gamma^{-1}(t)}} is absolutely continuous and, as described above, it is possible to construct a continuous process {\tilde X} with independent increments from a standard d-dimensional Brownian motion such that {\tilde X_t-\tilde X_s} has the {N(b_{\gamma^{-1}(t)}-b_{\gamma^{-1}(s)},\Sigma_{\gamma^{-1}(t)}-\Sigma_{\gamma^{-1}(s)})} distribution for all {t\ge s}. Then, {X_t=\tilde X_{\gamma(t)}} has the required properties.
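The effect of this time change is easy to see in a one-dimensional sketch. Here {\Sigma_t=\sqrt{t}} is a hypothetical example whose derivative blows up at zero; composing with {\gamma^{-1}} makes it 1-Lipschitz, in line with the inequality above:

```python
import numpy as np

# Scalar sketch of the time change: take Sigma_t = sqrt(t) (a hypothetical
# example with unbounded derivative at t = 0), so that
# gamma(t) = trace(Sigma_t) + t = sqrt(t) + t is strictly increasing.
t_grid = np.linspace(0.0, 4.0, 100_001)
Sigma = np.sqrt(t_grid)
gamma = Sigma + t_grid

# Sigma_{gamma^{-1}(u)}, evaluated by interpolating Sigma along gamma.
u = np.linspace(0.0, gamma[-1], 100_001)
composed = np.interp(u, gamma, Sigma)

# The composed function is 1-Lipschitz: all difference quotients are below 1,
# even though Sigma itself has unbounded difference quotients near 0.
print(np.max(np.diff(composed) / np.diff(u)))    # below 1
print(np.max(np.diff(Sigma) / np.diff(t_grid)))  # large
```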


Proof of the Theorem

Assume that X is a continuous process with independent increments. If it can be shown that {X_t-X_s} is normal for all {t>s}, then Theorem 2 will follow by setting

\displaystyle  b_t={\mathbb E}[X_t-X_0],\ \Sigma_t={\rm Var}(X_t-X_0).

By continuity of X, these are continuous functions. Furthermore, {\Sigma_t-\Sigma_s} is the covariance matrix of {X_t-X_s} and must be positive semidefinite. In fact it is enough to compute the characteristic function of {X_t-X_0} for all {t\ge 0},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\phi\colon{\mathbb R}_+\times{\mathbb R}^d\rightarrow{\mathbb C},\smallskip\\ &\displaystyle\phi(t,a)={\mathbb E}\left[e^{ia\cdot(X_t-X_0)}\right]. \end{array} (3)

The characteristic function of {X_t-X_s} can be recovered from {\phi} by applying the independent increments property,

\displaystyle  {\mathbb E}\left[e^{ia\cdot(X_t-X_0)}\right]= {\mathbb E}\left[e^{ia\cdot(X_t-X_s)}\right] {\mathbb E}\left[e^{ia\cdot(X_s-X_0)}\right].

So, the distribution of {X_t-X_s} is determined by

\displaystyle  {\mathbb E}\left[e^{ia\cdot(X_t-X_s)}\right]=\phi(t,a)/\phi(s,a). (4)
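Identity (4) is easy to test by simulation. Taking X to be a standard one-dimensional Brownian motion, so that {\phi(t,a)=e^{-ta^2/2}}, the ratio {\phi(t,a)/\phi(s,a)} can be compared against a Monte Carlo estimate of the characteristic function of the increment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Standard Brownian motion sampled at two times s < t.
s, t, a = 0.4, 1.0, 1.3
paths = 200_000
Bs = rng.normal(0.0, np.sqrt(s), paths)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), paths)  # independent increment

phi_s = np.mean(np.exp(1j * a * Bs))        # estimate of phi(s, a)
phi_t = np.mean(np.exp(1j * a * Bt))        # estimate of phi(t, a)
incr = np.mean(np.exp(1j * a * (Bt - Bs)))  # ch. fn. of the increment

# Both should be close to exp(-(t - s) * a**2 / 2), as in (4).
exact = np.exp(-(t - s) * a**2 / 2)
print(abs(phi_t / phi_s - exact), abs(incr - exact))  # small MC error
```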

Then, the proof of Theorem 2 requires showing that {\phi(t,a)} has the form of the characteristic function of a normal distribution (for each fixed t). That is, it is the exponential of a quadratic in a.

It is possible to prove the theorem directly, by splitting {X_t-X_0} up into small time increments,

\displaystyle  \phi(t,a)=\prod_{k=1}^n{\mathbb E}\left[e^{ia\cdot(X_{t_k}-X_{t_{k-1}})}\right]

for {0=t_0\le\cdots\le t_n=t}. Letting the mesh of this partition go to zero, it is possible to show that only terms up to second order in a contribute to the terms {{\mathbb E}[e^{ia\cdot(X_{t_k}-X_{t_{k-1}})}]} in the limit. This does involve a tricky argument, taking care to correctly bound the higher order terms.

An alternative approach, which I take here, is to use stochastic calculus. Up to a martingale term, Ito’s lemma enables us to write the logarithm of {\phi(t,a)} in terms of X and a quadratic variation term. Then, taking expectations will give the desired quadratic form for {\log\phi(t,a)}.

As always, we will work with respect to a filtered probability space {(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge 0},{\mathbb P})}. In particular, if {\{\mathcal{F}_t\}} is the natural filtration of a process X with the independent increments property then, for {t\ge s}, {X_t-X_s} will be independent of {\mathcal{F}_s}. This will be assumed throughout the remainder of this post.

Let us start by showing that the characteristic functions of X have well-defined and continuous logarithms everywhere which, in particular, requires that {\phi} be everywhere nonzero. On top of the independent increments property, only continuity in probability of X is required. That is, {X_{t_n}\rightarrow X_t} in probability for all sequences {t_n} of times tending to t. This is a much weaker condition than pathwise continuity.

Lemma 3 Let X be a d-dimensional process which is continuous in probability and has independent increments. Then, there exists a unique continuous function {\psi\colon{\mathbb R}_+\times{\mathbb R}^d\rightarrow{\mathbb C}} with {\psi(0,a)=0} and

\displaystyle  {\mathbb E}[e^{ia\cdot(X_t-X_0)}]=e^{\psi(t,a)}. (5)

Furthermore,

\displaystyle  e^{ia\cdot X_t - \psi(t,a)} (6)

is a martingale for each fixed {a\in{\mathbb R}^d}.

Proof: First, the function {\phi} defined by (3) will be continuous. Indeed, if {a_n\rightarrow a} and {t_n\rightarrow t} then {e^{ia_n\cdot(X_{t_n}-X_0)}} tends to {e^{ia\cdot(X_t-X_0)}} in probability and, by bounded convergence, {\phi(t_n,a_n)\rightarrow\phi(t,a)}. We need to take its logarithm, for which it is necessary to show that it is never zero.

Suppose that {\phi(t,a)=0} for some t,a. By continuity, for the given value of a, t can be chosen to be minimal. From the definition, {\phi(0,a)=1}, so t is strictly positive. By the independent increments property, for all {s<t}

\displaystyle  \phi(t,a)=\phi(s,a){\mathbb E}\left[e^{ia\cdot(X_t-X_s)}\right].

By minimality of t, {\phi(s,a)} is nonzero. Also, by continuity in probability, the second term on the right hand side tends to 1 as s increases to t, so is also nonzero for s close enough to t. So, {\phi(t,a)\not=0}, giving the required contradiction.

We have shown that {\phi} is a continuous function from {{\mathbb R}_+\times{\mathbb R}^d} to {{\mathbb C}^\times={\mathbb C}\setminus\{0\}}. It is a standard result from algebraic topology that {{\mathbb C}} is the covering space of {{\mathbb C}^\times} with respect to the map {z\mapsto\exp(z)} and, therefore, {\phi} has a unique lift {\psi\colon{\mathbb R}_+\times{\mathbb R}^d\rightarrow{\mathbb C}} with {\psi(0,0)=0}. That is, {\phi=\exp(\psi)}. As {\phi(0,a)=1}, the continuous function {a\mapsto\psi(0,a)} takes values in {2\pi i{\mathbb Z}} and, hence, is identically zero.

More explicitly, {\psi} can be constructed as follows. For any positive constants {K,T}, the continuity of {\phi} implies that there are times {0=t_0<t_1<\cdots<t_n=T} such that {\vert\phi(t,a)/\phi(t_k,a)-1\vert<1} for all t in the interval {[t_k,t_{k+1}]} and {\Vert a\Vert\le K}. So, {\phi(t,a)/\phi(t_k,a)} lies in the right half-plane of {{\mathbb C}}. As the complex logarithm is uniquely defined as a continuous function on this region, satisfying {\log1=0}, {\psi(t,a)} uniquely extends from {t=t_k} to {t_k<t\le t_{k+1}} by

\displaystyle  \psi(t,a)=\psi(t_k,a)+\log\left(\phi(t,a)/\phi(t_k,a)\right).

So {\psi(t,a)} is uniquely defined over {t\le T} and {\Vert a\Vert\le K} and, by letting T, K increase to infinity, it is uniquely defined on all of {{\mathbb R}_+\times{\mathbb R}^d}.
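This incremental construction of {\psi} is essentially the standard continuous-logarithm (phase unwrapping) algorithm. A minimal sketch, assuming {\phi} is sampled along a grid fine enough that consecutive ratios stay close to 1, with first value 1:

```python
import numpy as np

def continuous_log(phi_values):
    """Continuous logarithm of a nonvanishing path, assuming phi_values[0] = 1
    and a grid fine enough that consecutive ratios stay close to 1, so the
    principal logarithm applies on each step."""
    ratios = phi_values[1:] / phi_values[:-1]
    return np.concatenate([[0.0j], np.cumsum(np.log(ratios))])

# Example: phi(t) = exp(2*pi*i*t) winds around the origin twice on [0, 2].
# The pointwise principal logarithm would jump; the continuous lift does not.
t = np.linspace(0.0, 2.0, 2001)
phi = np.exp(2j * np.pi * t)
psi = continuous_log(phi)
print(psi[-1])  # close to 4*pi*i, recording the winding, rather than 0
```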

It only remains to show that (6) is a martingale, which follows from (4) and the independent increments property,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle{\mathbb E}\left[e^{ia\cdot X_t-\psi(t,a)}\mid\mathcal{F}_s\right] &\displaystyle=e^{ia\cdot X_s-\psi(t,a)}{\mathbb E}\left[e^{ia\cdot (X_t-X_s)}\mid\mathcal{F}_s\right]\smallskip\\ &\displaystyle=e^{ia\cdot X_s-\psi(t,a)}\phi(t,a)/\phi(s,a)\smallskip\\ &\displaystyle=e^{ia\cdot X_s-\psi(s,a)}. \end{array}

With the aid of Ito’s lemma, it is possible to take logarithms of (6). This shows that, up to a deterministic drift term, X is a semimartingale, and also gives an expression for {\psi(t,a)} up to a martingale term.

Lemma 4 Let X be a continuous d-dimensional process with independent increments and {\psi_t(a)\equiv\psi(t,a)} be as in (5).

Then, there exists a continuous {\tilde b\colon{\mathbb R}_+\rightarrow{\mathbb R}^d} such that {\tilde X_t\equiv X_t-\tilde b_t} is a semimartingale. Furthermore,

\displaystyle  ia\cdot(X_t-X_0)-\psi_t(a)-\frac12[a\cdot\tilde X]_t (7)

is a square integrable martingale, for all {a\in{\mathbb R}^d}.

The proof of this makes use of complex-valued semimartingales, which are complex valued processes whose real and imaginary parts are both semimartingales. It is easily checked that Ito’s lemma holds for complex semimartingales, simply by applying the result to the real and imaginary parts separately.

Proof: Fixing an {a\in{\mathbb R}^d}, set {Y_t=ia\cdot (X_t-X_0)-\psi_t(a)}. Then, by Lemma 3, {U\equiv\exp(Y)} is a martingale and, hence, a semimartingale. Then, by Ito’s lemma, {Y=\log(U)} is a semimartingale. Note that, although the logarithm is not a well-defined twice differentiable function everywhere on {{\mathbb C}^\times}, this is true locally (actually, on any half-plane), so there is no problem in applying Ito’s lemma here.

We have shown that {ia\cdot X_t-\psi_t(a)} is a semimartingale. Taking imaginary parts, {a\cdot X_t-\Im\psi_t(a)} is a semimartingale. In particular, writing {\tilde b^k_t=\Im\psi_t(e_k)} where {e_k} is the unit vector along the k’th dimension, then {\tilde b_t=(\tilde b^1_t,\ldots,\tilde b^d_t)} is a continuous function from {{\mathbb R}_+} to {{\mathbb R}^d} and {\tilde X_t\equiv X_t-\tilde b_t} is a semimartingale.

Applying Ito’s lemma again,

\displaystyle  U_t=1+\int_0^tU_s\,dY_s + \frac12\int_0^tU_s\,d[Y]_s.

As {\vert U_t\vert=\vert\exp(-\psi_t(a))\vert}, U is uniformly bounded over any finite time interval and, in particular, is a square integrable martingale. Similarly, {U^{-1}} is uniformly bounded on any finite time interval, so

\displaystyle  M\equiv\int U^{-1}\,dU = Y+\frac12[Y] (8)

is also a square integrable martingale.

Now, Y can be written as {ia\cdot (\tilde X-\tilde X_0)-V} for the process

\displaystyle  V_t\equiv\psi_t(a)-ia\cdot \tilde b_t=ia\cdot(\tilde X_t-\tilde X_0)-Y_t,

which is both a semimartingale and a deterministic process. So, the integrals {\int_0^t\xi\,dV} are bounded in probability over the set of all piecewise-constant deterministic processes {\vert\xi\vert\le1} and, being deterministic, are uniformly bounded. Therefore, V has bounded variation over each bounded time interval. We have shown that Y equals {ia\cdot(\tilde X-\tilde X_0)} plus an FV process and, recalling that continuous FV processes do not contribute to quadratic variations,

\displaystyle  [Y]=[ia\cdot\tilde X]=-[a\cdot\tilde X].

Substituting this and the definition of Y back into (8) shows that expression (7) is the square integrable martingale M. ⬜

Finally, taking expectations of (7) gives the required form for {\psi(t,a)}, showing that {X_t-X_s} is a joint normal random variable for any {s\le t}, and completing the proof of Theorem 2.

Lemma 5 Let X be a continuous d-dimensional process with independent increments, and {\psi(t,a)} be as in (5). Then, there are functions {b\colon{\mathbb R}_+\rightarrow{\mathbb R}^d} and {\Sigma\colon{\mathbb R}_+\rightarrow{\mathbb R}^{d^2}} such that

\displaystyle  \psi(t,a)=ia\cdot b_t-\frac12 a^{\rm T}\Sigma_ta.

Proof: Taking the imaginary part of (7) shows that {a\cdot(X_t-X_0)-\Im\psi_t(a)} is a martingale. In particular, {X-X_0} is integrable and, taking expectations,

\displaystyle  \Im\psi_t(a)=a\cdot{\mathbb E}[X_t-X_0].

Taking the real part of (7) shows that {\Re\psi_t(a)+\frac12[a\cdot\tilde X]_t} is a martingale. So, {[\tilde X^j,\tilde X^k]} are integrable processes and

\displaystyle  \Re\psi_t(a)=-\frac12{\mathbb E}\left[[a\cdot\tilde X]_t\right]=-\frac12\sum_{jk}a_ja_k{\mathbb E}\left[[\tilde X^j,\tilde X^k]_t\right].

The result follows by taking {b_t={\mathbb E}[X_t-X_0]} and {\Sigma^{jk}_t={\mathbb E}\left[[\tilde X^j,\tilde X^k]_t\right]}. ⬜

12 thoughts on “Continuous Processes with Independent Increments”

  1. Let Y and Z be two Brownian motions and consider the process X = pY + (1-p^2)^0.5 Z, where p is between -1 and 1. Assume X is continuous and has marginal distributions N(0,t). Is X a Brownian motion?

    Another similar example… if Z is a normal N(0,1) random variable then the process X(t) = t^0.5 Z is continuous and marginally distributed as N(0,t). But is X a Brownian motion?

    I am confused how to prove the independent increments property or how to verify it. Any suggestions?

    1. Hi. You don’t need any advanced results to consider the examples you mention. If Y,Z are independent Brownian motions then X=pY+(1-p^2)^{\frac12}Z will be a Brownian motion. This just uses the fact that a sum of independent normals is normal, so you can calculate the distribution of X. If you aren’t assuming that they are independent then it will depend on precisely what you are assuming, and X does not have to be a Brownian motion in general.

      The process X_t=t^{\frac12}Z is not a Brownian motion, as its increments are all proportional to Z, so are not independent.

  2. Hi, is your definition equivalent to the one commonly used:
    For any $n$ and any times $0<s_1<t_1<\ldots < s_n<t_n$, the random variables $\{X_{t_i}-X_{s_i}\}$ are independent?

  3. Hi George, could you explain in a bit more detail how Ito’s lemma applies to log(U) in the proof of Lemma 4?
    So U would be the continuous semimartingale and f is the complex logarithm in the original Ito’s lemma. To apply Ito’s lemma we need U to take values in C – some branch cut.
    But from the formula of exp(Y), I can only see that it must not take the value 0. So how do we ensure that U will not take values in some branch cut of the complex logarithm?

  4. Hi. You asked quite a few questions. I will answer when I have time, but starting on this one:
    As written here, I am using Ito’s formula *locally*. i.e., once you have proved that Y is a semimartingale when stopped at a stopping time tau, you can let tau’ be the first time that U(t)/U(tau) is imaginary, so we remain in the same half-plane for times tau <= t <= tau'. Hence, the branch cut can be chosen to miss this half-plane. Apply Ito's formula to the process started at time tau and stopped at tau'.
    There are technical details here, but it is just managing the definitions of stopping times, semimartingales, etc. Nothing advanced is really going on.

    1. If you really wanted to get deep into the maths and formalize these ideas, you could consider semimartingales on a smooth (or C2) manifold, and lift them to the covering space, which is also a C2 manifold. Then, log is defined, and twice differentiable, on the covering space of the complex numbers minus the origin.

    2. Thank you George but I am not familiar with this idea. Could you help me understand a bit more? So we want to show that Y=log(U) is a semimartingale knowing that U is a semimartingale.
      I can’t see how we can prove that Y is a semimartingale when stopped at any stopping time tau.

      I think you mean that U is a semimartingale when stopped at tau? This would be true since U is already a semimartingale and stopping preserves semimartingale. And we can take tau’ as you suggest.
      But if U is a semimartingale, is U_\tau also a semimartingale?
      And how do we exactly define a process that starts at some stopping time tau and stops at tau’?
      Stopping at tau’ can be done by the usual process of U^\tau’. But I don’t know how we can just start it at some stopping time and how that would still preserve the semimartingale property.

      Finally, I guess you mean we can then apply Ito’s formula to the process U starting at tau and stopping at tau’ since it would stay in the same half plane. Then we get that log(U)=Y starting at tau and stopping at tau’ is a semimartingale. But how does this prove that Y is a semimartingale? To be a semimartingale or equivalently, locally a semimartingale, we need some sequence tau_n increasing to infinity and Y^\tau_n to be a semimartingale. But the way we define tau’, we cannot guarantee that we can find a sequence increasing to infinity.

      Sorry to bother you with details but it is my first time seeing this kind of argument and I haven’t been able to get help elsewhere so I would greatly appreciate your help.

    3. Hello, I was going through the proof and need help with one part. It was mentioned that a (continuous) semimartingale that is deterministic has finite variation (the paragraph after equation (8)). This seems intuitively correct, as the unique martingale component would result in large local variation with nonzero quadratic variation, but I can’t seem to prove it rigorously. Will be glad if you could provide a reference. Thank you!

  5. There are various ways of showing that a process is a semimartingale. Here, I am using the fact that a C2 function of a semimartingale is again a semimartingale, plus additional basic facts (like a semimartingale stopped at a stopping time is a semimartingale).
    Anyway, the idea here is define stopping times tau_n inductively by tau_0=0 and tau_{n+1} >= tau_n being the first time at which U_t/U_{tau_n} is imaginary.
    We can show that Y=log(U) is a ‘semimartingale on each interval [tau_n,tau_{n+1}]’, since we can choose a branch of log defined and C2 for U_t over this interval.
    – But how to interpret the statement that something is a semimartingale on a stochastic interval?
    There are various ways. We can stop it at time tau_{n+1}, and ‘start’ it at tau_n. Either look at the process U_{tau_n+t} with time shifted so that it starts at tau_n. Or, look at Y_t-Y_{tau_n}=log(U_t/U_{tau_n}), but set this to 0 before time tau_n.
    Either way works. And, by adding together (or otherwise combining) the processes over each interval, you should be able to show that the stopped processes Y^{tau_n} are semimartingales, implying that Y is a semimartingale.

  6. Hello,

    I am not sure if I follow the part where it says a deterministic semimartingale has finite variation (in the paragraph after equation (8)). Will be grateful if you could provide a reference. Thanks!
