Thanks for the comment. I’ll have to reread my MathOverflow response and get back when I have a bit of time.

I used the result that, for a bounded variation function $f$, the variation of $\int g\,df$ is equal to $\int\lvert g\rvert\,d\lvert f\rvert$, where $\lvert f\rvert$ denotes the variation of $f$. I did not include a proof, as it is basic (non-stochastic) calculus.

The Hahn decomposition theorem says that $df=\epsilon\,d\lvert f\rvert$, where $\epsilon$ is measurable with absolute value 1. Then, writing $F=\int g\,df$, we have $dF=g\epsilon\,d\lvert f\rvert=\eta\lvert g\rvert\,d\lvert f\rvert$ for some measurable function $\eta$ of absolute value 1. The result follows from this…
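For concreteness, here is a worked version of that argument (my own reconstruction of the steps, under the assumption that $f$ has finite variation and $g$ is integrable against it):

```latex
% If f has finite variation and F(t) = \int_0^t g\,df, the Hahn
% decomposition gives df = \epsilon\,d|f| with |\epsilon| = 1, so
\[
  dF \;=\; g\,df \;=\; g\epsilon\,d\lvert f\rvert
      \;=\; \eta\,\lvert g\rvert\,d\lvert f\rvert,
  \qquad \lvert\eta\rvert = 1 .
\]
% Taking variations on both sides then yields
\[
  d\lvert F\rvert \;=\; \lvert\eta\rvert\,\lvert g\rvert\,d\lvert f\rvert
                  \;=\; \lvert g\rvert\,d\lvert f\rvert .
\]
```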

I think there are two things to say here. (1) In many situations we should choose the cadlag version of our process (e.g., for stochastic integrals, and for optional stopping and sampling). (2) For many processes, including (sub/super)martingales, cadlag versions do indeed exist.

The second of these is a rather strong and very useful mathematical result. In some cases, such as the compensated Poisson process, you already know that the process is cadlag, so the result is not so helpful. However, the fact still remains that this cadlag version should be used in many applications.
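To see concretely why the compensated Poisson process is already cadlag, here is a minimal simulation sketch (my own illustration, not from the thread; the rate, horizon and seed are arbitrary): its paths are right-continuous step functions minus a continuous drift, so left limits exist everywhere and each jump time is a point of right-continuity.

```python
# Sketch: a compensated Poisson path M_t = N_t - rate*t is cadlag.
# All names and parameters here are hypothetical illustration choices.
import numpy as np

rng = np.random.default_rng(1)
rate, horizon = 2.0, 5.0

# Jump times: cumulative sums of i.i.d. exponential inter-arrival gaps.
gaps = rng.exponential(1.0 / rate, size=50)
jumps = np.cumsum(gaps)
jumps = jumps[jumps <= horizon]

def M(t):
    """Compensated Poisson path: jumps up to and including t, minus drift."""
    return np.searchsorted(jumps, t, side="right") - rate * t

t0 = jumps[0]          # a jump time of the path
eps = 1e-9
left = M(t0 - eps)     # left limit exists (no jump in (t0-eps, t0))
right = M(t0)          # value AT t0 already includes the jump
print(right - left)    # approximately 1: right-continuous with a left limit
```

The point of `side="right"` in `searchsorted` is exactly right-continuity: the count at `t0` includes the jump at `t0`, while the left limit does not.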

I have a question. For example, the compensated Poisson process is already a cadlag process, so why do we care about the existence of its cadlag modifications? Or is it that a cadlag modification of the original process on a different topological space may have benefits? Can you please explain?

To be more precise, I meant $c_i a_{ij}$ instead of $b_i a_{ij}$ in the setting of $\tilde{b}_i$.

Dear George,

I see. Well, then can you please explain (or give a reference for) why the variation of the third term equals the integral of the absolute value with respect to the variation of $[X,Y]$? I couldn’t find a reference for this result.

Nice to connect. I have a question related to your MathOverflow response (https://mathoverflow.net/questions/59739/gaussian-processes-sample-paths-and-associated-hilbert-space).

Let X be a generic topological space and k : X x X -> R a reproducing kernel indexed on X. Let H_k be the RKHS of functions f : X -> R associated to k, and let <.,.>_k be the inner product of H_k. Consider a Gaussian process GP(0,k) supported on X.

In the (machine learning) GP community, people often refer to Driscoll’s theorem, which states that if the RKHS H_k is infinite-dimensional then a sample f ~ GP does not belong to H_k with probability 1. Suppose that H_k has a countable orthonormal basis e = {e_1, e_2, …}.

Is it possible for you to expand your argument (based on cylindrical measures) to justify that, for any sample f ~ GP and any basis element e_n, the quantity <f, e_n>_k is well defined, despite the fact that f is not in H_k a.s.?

In various papers in the GP community, people use arguments that are specific to the particular choice of kernel k. It would be very nice to have a more general justification that does not depend on this choice.
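A numerical sketch of the phenomenon in the question, via the Karhunen–Loève expansion (this is my own illustration; the eigenvalue sequence and all names are hypothetical, and I assume a Mercer expansion k(x,y) = sum_n lam_n phi_n(x) phi_n(y) with e_n = sqrt(lam_n) phi_n orthonormal in H_k). A sample is f = sum_n xi_n e_n with xi_n i.i.d. N(0,1), so each coefficient <f, e_n>_k = xi_n is well defined, yet ||f||_k^2 = sum_n xi_n^2 diverges a.s., matching Driscoll's theorem:

```python
# Karhunen-Loeve sketch: every RKHS coefficient of a GP sample exists,
# but the RKHS norm of the sample diverges.  All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
lam = 1.0 / np.arange(1, n + 1) ** 2   # hypothetical Mercer eigenvalues
xi = rng.standard_normal(n)            # i.i.d. N(0,1) KL coefficients

coeffs = xi                            # <f, e_n>_k = xi_n: each one finite
partial_norms = np.cumsum(xi ** 2)     # partial sums of ||f||_k^2
l2_norm2 = np.sum(lam * xi ** 2)       # ||f||_{L^2}^2 converges: f exists

print(np.all(np.isfinite(coeffs)))     # True: every coefficient is defined
print(partial_norms[-1])               # grows like n; no finite RKHS norm
```

The truncation at n = 2000 only illustrates the divergence; the point is that the finiteness of each <f, e_n>_k is a statement about one coordinate at a time, while membership in H_k requires summability over all coordinates.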

Many thanks,

Cris
