Do Convex and Decreasing Functions Preserve the Semimartingale Property — A Possible Counterexample

Figure 1: The function f, convex in x and decreasing in t

Here, I attempt to construct a counterexample to the hypotheses of the earlier post, Do convex and decreasing functions preserve the semimartingale property? There, it was asked: for any semimartingale X and function {f\colon{\mathbb R}_+\times{\mathbb R}\rightarrow{\mathbb R}} such that {f(t,x)} is convex in x and right-continuous and decreasing in t, is {f(t,X_t)} necessarily a semimartingale? It was explained there how this is equivalent to the following hypothesis: every function {f\colon[0,1]^2\rightarrow{\mathbb R}} such that {f(t,x)} is convex and Lipschitz continuous in x and decreasing in t decomposes as {f=g-h}, where {g(t,x)} and {h(t,x)} are convex in x and increasing in t. This is the form of the hypothesis that this post is concerned with, so the example only involves simple real analysis and no stochastic calculus. I will give some numerical calculations suggesting that the construction below is a counterexample, but I do not have a proof of this, so the hypothesis remains open.

Although the construction given here will be self-contained, it is worth noting that it is connected to the example of a martingale which moves along a deterministic path. If {\{M_t\}_{t\in[0,1]}} is the martingale constructed there, then

\displaystyle  C(t,x)={\mathbb E}[(M_t-x)_+]

defines a function from {[0,1]\times[-1,1]} to {{\mathbb R}} which is convex in x and increasing in t. The question is then whether C can be expressed as the difference of functions which are convex in x and decreasing in t. The example constructed in this post will be the same as C with the time direction reversed, and with a linear function of x added so that it is zero at {x=\pm1}.

I now move on to the construction of the example. For symmetry, I will consider x to take values in {[-1,1]} rather than the unit interval. So, we will construct a function {f\colon R_0\rightarrow{\mathbb R}} where {R_0} is the rectangle

\displaystyle  R_0=[0,1]\times[-1,1],

such that {f(t,x)} is convex in x and decreasing in t. We start by defining f for {t=0,1},

\displaystyle  f(t,x) = \begin{cases} 0,&t=0,\\ \lvert x\rvert - 1,&t=1. \end{cases} (1)

Next, extend the definition to the times {t_i=i/4} ({i=0,1,2,3,4}), working backwards from {t_4=1}: the slice {f(t_{i-1},x)} is obtained from {f(t_i,x)} by linearly interpolating across an interval {[a_i,b_i]},

\displaystyle  f(t_{i-1},x)=\begin{cases} \frac{(b_i-x)f(t_i,a_i)+(x-a_i)f(t_i,b_i)}{b_i-a_i},&x\in[a_i,b_i],\\ f(t_i,x),&x\not\in[a_i,b_i]. \end{cases}

Specifically, using the following intervals

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle [a_1,b_1] &= [-1,1],\smallskip\\ \displaystyle [a_2,b_2] &= [-1,0],\smallskip\\ \displaystyle [a_3,b_3] &= [0,1],\smallskip\\ \displaystyle [a_4,b_4] &= [-1/2,1/2], \end{array}

gives the time slices {x\mapsto f(t_i,x)} shown in the first two iterations of Figure 2.
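For concreteness, carrying out the interpolation steps in the order {i=4,3,2,1}, starting from {f(1,x)=\lvert x\rvert-1}, gives the explicit time slices

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle f(t_3,x) &\displaystyle= \max(\lvert x\rvert-1,-1/2),\smallskip\\ \displaystyle f(t_2,x) &\displaystyle= \begin{cases} -x-1,&x\in[-1,-1/2],\\ -1/2,&x\in[-1/2,0],\\ (x-1)/2,&x\in[0,1], \end{cases}\smallskip\\ \displaystyle f(t_1,x) &\displaystyle= (\lvert x\rvert-1)/2,\smallskip\\ \displaystyle f(t_0,x) &\displaystyle= 0, \end{array}

confirming that the recursion reproduces the boundary condition {f(0,x)=0}.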

We now interpolate f between the times {t_i}. For {x\not\in[a_i,b_i]} we have {f(t_{i-1},x)=f(t_i,x)} so, by monotonicity, f must be constant in t across the interval {[t_{i-1},t_i]}. On the other hand, for {x\in[a_i,b_i]}, we interpolate f as a scaled copy of itself on the whole of {R_0}, plus a linear term in x in order to line up at times {t_{i-1}} and {t_i}. So, f will be self-similar on the rectangles {R_i=[t_{i-1},t_i]\times[a_i,b_i]}. To express this self-similarity, define functions

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle\varphi_i\colon[a_i,b_i]\rightarrow{\mathbb R},\smallskip\\ &\displaystyle \varphi_i(x)= \frac{(b_i-x)f(t_i,a_i)+(x-a_i)f(t_i,b_i)}{b_i-a_i}. \end{array}

We also let {h_i} be the change in {-f(t,(a_i+b_i)/2)} across the interval {[t_{i-1},t_i]}. To be explicit,

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rlrl} \displaystyle\varphi_1(x) &\displaystyle= 0,&\displaystyle h_1&\displaystyle=1/2,\smallskip\\ \displaystyle\varphi_2(x) &\displaystyle= -(x+1)/2,&\displaystyle h_2&\displaystyle=1/4,\smallskip\\ \displaystyle\varphi_3(x) &\displaystyle= (x-1)/2,&\displaystyle h_3&\displaystyle=1/4,\smallskip\\ \displaystyle\varphi_4(x) &\displaystyle= -1/2,&\displaystyle h_4&\displaystyle=1/2. \end{array}

Then, f is defined on {t_{i-1} \le t < t_i} by,

\displaystyle  f(t,x)=\begin{cases} h_if\left(4(t-t_{i-1}),\frac{2x-a_i-b_i}{b_i-a_i}\right)+\varphi_i(x),&x\in[a_i,b_i],\\ f(t_i,x),&x\not\in[a_i,b_i]. \end{cases}
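As a consistency check, write {u=\frac{2x-a_i-b_i}{b_i-a_i}} for the rescaled space variable. At {t=t_{i-1}} the formula gives {f(t_{i-1},x)=h_if(0,u)+\varphi_i(x)=\varphi_i(x)}, which is the linear interpolant defined above and, letting {t\uparrow t_i},

\displaystyle  h_if(1,u)+\varphi_i(x)=h_i(\lvert u\rvert-1)+\varphi_i(x)=f(t_i,x)

on {[a_i,b_i]}, as can be checked directly from the values of {h_i} and {\varphi_i} listed above. So, f is continuous across the times {t_i}.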

This construction of f can be done iteratively. Let {\mathcal{D}} denote the space of functions {f\colon R_0\rightarrow{\mathbb R}} such that {f(t,x)} is convex in x and continuous and decreasing in t, and satisfying (1). Then define {F\colon\mathcal{D}\rightarrow\mathcal{D}} by {\tilde f=F(f)} where, for {t_{i-1}\le t < t_i},

\displaystyle  \tilde f(t,x)=\begin{cases} h_i f\left(4(t-t_{i-1}),\frac{2x-a_i-b_i}{b_i-a_i}\right)+\varphi_i(x),&x\in[a_i,b_i],\\ \tilde f(t_i,x),&x\not\in[a_i,b_i]. \end{cases}

It can be seen that, with respect to the supremum norm, F is Lipschitz continuous on {\mathcal{D}} with Lipschitz constant 1/2, since each of the scaling factors {h_i} is at most 1/2. So, by the Banach fixed point theorem, there is a unique fixed point {f=F(f)}, and it can be constructed iteratively: choose any {f_0\in\mathcal{D}} and inductively define {f_{n+1}=F(f_n)}, which converges uniformly to the required f. Furthermore, for each fixed n, the values of {f_m} on the time slices {t=i/4^n} ({i=0,1,\ldots,4^n}) are unchanged over all {m\ge n}, so {f_n} already equals f at these time slices. Computing successive iterations therefore generates f on successively finer sets of time slices, as in Figure 2. Figure 1 shows f computed in this way using 5 iterations.
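To illustrate, here is a minimal Python sketch evaluating the fixed point pointwise (my own illustrative code, not that used to produce the figures). It truncates the self-similar recursion at a fixed depth n which, by the contraction property, introduces an error of at most {2^{-n}}:

```python
# Intervals [a_i, b_i], linear terms phi_i and scaling factors h_i from
# the construction above, indexed 0..3 rather than 1..4.
A = [-1.0, -1.0, 0.0, -0.5]
B = [1.0, 0.0, 1.0, 0.5]
H = [0.5, 0.25, 0.25, 0.5]
PHI = [
    lambda x: 0.0,
    lambda x: -(x + 1) / 2,
    lambda x: (x - 1) / 2,
    lambda x: -0.5,
]

def f(t, x, depth=30):
    """Evaluate the fixed point f = F(f) at (t, x), truncating the
    self-similar recursion at the given depth. Since each level scales
    by h_i <= 1/2, the truncation error is at most 2**-depth."""
    if t <= 0.0:
        return 0.0
    if t >= 1.0 or depth == 0:
        return abs(x) - 1.0  # the t = 1 slice, also used as the depth cutoff
    i = min(int(4 * t), 3)   # t lies in [t_i, t_{i+1}) = [i/4, (i+1)/4)
    a, b = A[i], B[i]
    if a <= x <= b:
        u = (2 * x - a - b) / (b - a)  # rescale [a_i, b_i] onto [-1, 1]
        return H[i] * f(4 * t - i, u, depth - 1) + PHI[i](x)
    # Outside [a_i, b_i], f is constant in t across the interval.
    return f((i + 1) / 4, x, depth)

# Sanity check: f(1/4, x) should equal (|x| - 1)/2.
print([round(f(0.25, k / 4), 3) for k in range(-4, 5)])
```

Evaluating this on a grid of x values at the dyadic times {i/4^n} reproduces, up to the truncation error, the time slices plotted in Figure 2.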

Figure 2: Iteratively computing f

Computing the decomposition

With the function f constructed as above, we ask if it is possible to decompose

\displaystyle  f=g-h (2)

for functions {g(t,x)} and {h(t,x)} convex in x and increasing in t. The hypothesis of the post Do convex and decreasing functions preserve the semimartingale property? is that this is always possible. However, the numerical calculations presented below suggest that f is a counterexample and that decomposition (2) does not exist. I do not have a proof of this though, so the hypothesis remains open.

The method described in the earlier post can be used to construct the decomposition (2), if it exists. The idea is to construct {h(t,x)}, convex in x and increasing in t, such that {f(t,x)+h(t,x)} is also increasing in t; then (2) follows by taking {g=f+h}. This is done by approximating h along partitions {0=s_0 < s_1 <\cdots < s_m=1} of the unit interval. Using {{\rm CH}(u(x))} to denote the convex hull of a function u (its greatest convex minorant), the approximation {\tilde h} is defined inductively, backwards through the times {s_m,s_{m-1},\ldots}, by

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\tilde h(s_m,x)&\displaystyle=0,\smallskip\\ \displaystyle\tilde h(s_{k-1},x)&\displaystyle={\rm CH}\left(\tilde h(s_k,x)+f(s_k,x)-f(s_{k-1},x)\right). \end{array}

As the function f was constructed on the nth iteration at the times {i/4^n} ({i=0,1,\ldots,4^n}), it is convenient to compute the approximation to h along these same partitions. Also, as {f(t,x)} is piecewise linear in x at these times, the same holds for {\tilde h}. This makes it relatively simple to compute the approximation to h on a computer for each successive iteration of f. The resulting time slices {x\mapsto\tilde h(t,x)} are shown in Figure 3.
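Concretely, the convex hull of a piecewise-linear function can be computed exactly by the standard monotone-chain (lower convex hull) algorithm, provided the grid contains all of its kink points. The following Python sketch of one backward step is again my own illustrative code, with names of my choosing:

```python
def lower_hull(xs, ys):
    """Indices of the vertices of the lower convex hull of the points
    (xs[j], ys[j]), where xs is strictly increasing (monotone chain)."""
    v = []
    for j in range(len(xs)):
        # Pop the last vertex while it lies on or above the chord from
        # the second-to-last vertex to point j.
        while len(v) >= 2 and (
            (ys[v[-1]] - ys[v[-2]]) * (xs[j] - xs[v[-2]])
            >= (ys[j] - ys[v[-2]]) * (xs[v[-1]] - xs[v[-2]])
        ):
            v.pop()
        v.append(j)
    return v

def convex_hull_on_grid(xs, ys):
    """Evaluate CH(y) back on the grid xs; exact when y is piecewise
    linear with all of its kinks among the grid points."""
    v, out, seg = lower_hull(xs, ys), [], 0
    for x in xs:
        while xs[v[seg + 1]] < x:  # advance to the hull segment containing x
            seg += 1
        x0, x1 = xs[v[seg]], xs[v[seg + 1]]
        y0, y1 = ys[v[seg]], ys[v[seg + 1]]
        out.append(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
    return out

def backward_step(xs, h_next, f_next, f_prev):
    """One backward step: h(s_{k-1},.) = CH(h(s_k,.) + f(s_k,.) - f(s_{k-1},.))."""
    return convex_hull_on_grid(
        xs, [h + fn - fp for h, fn, fp in zip(h_next, f_next, f_prev)]
    )
```

Starting from {\tilde h(s_m,\cdot)=0} and applying backward_step over each partition interval in turn produces the approximations plotted in Figure 3.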

Figure 3: Iteratively computing h

It can be seen that the successive approximations to h are decreasing, with a minimum of -1 for the first two iterations, falling to around -1.2 by the fourth iteration. The plot of h calculated at iteration 5 is given in Figure 4, showing that the minimum is slightly less than -1.3.

Figure 4: The function h, computed with 5 iterations

Now, by Lemma 2 of the previous post, one of the following statements holds.

  1. Decomposition (2) exists, and the successive approximations for h converge to a finite limit.
  2. Decomposition (2) does not exist, and the successive approximations for {h(0,x)} diverge to {-\infty}.

In an attempt to determine which of these cases is true, I computed h for a few iterations and looked at the successive {L^\infty} and {L^1} norms of h at time {t=0},

\displaystyle  \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\lVert h\rVert_\infty&\displaystyle=\sup_{x\in[-1,1]}\lvert h(0,x)\rvert,\smallskip\\ \displaystyle\lVert h\rVert_1&\displaystyle=\int_{-1}^1\lvert h(0,x)\rvert\,dx. \end{array}
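As {h(0,x)} is piecewise linear on the computed grid, both norms can be evaluated exactly from the grid values; a small sketch, using the fact that {h(0,x)\le0} so that no sign changes occur inside a grid cell:

```python
def norms(xs, h0):
    """Exact L1 and L-infinity norms of a piecewise-linear function with
    values h0 on the grid xs, assuming no sign change within a cell
    (which holds here, as h(0, .) <= 0)."""
    sup = max(abs(y) for y in h0)
    l1 = sum(
        (abs(h0[j]) + abs(h0[j + 1])) / 2 * (xs[j + 1] - xs[j])
        for j in range(len(xs) - 1)
    )
    return l1, sup
```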

Using convexity, together with the fact that h vanishes at {x=\pm1}, it can be shown that the norms are equivalent, {\lVert h\rVert_\infty\le\lVert h\rVert_1\le2\lVert h\rVert_\infty}, although I would expect the {L^1} norm to be a bit better behaved across successive iterations. The calculations up to iteration 13 are given in the table below, and it can be seen that the norm is slowly but steadily increasing.

Iteration   {\lVert h\rVert_1}   {\lVert h\rVert_\infty}
 0          1                    1
 1          1.25                 1
 2          1.398                1.115
 3          1.520                1.183
 4          1.635                1.252
 5          1.743                1.336
 6          1.846                1.414
 7          1.950                1.491
 8          2.050                1.568
 9          2.151                1.645
10          2.249                1.721
11          2.347                1.796
12          2.444                1.870
13          2.541                1.944

The function f constructed in this post is a counterexample to the hypothesis of the previous post if and only if the norm is increasing to infinity. Figure 5 displays the behaviour, and it does look like the norm is increasing roughly linearly in the number of iterations.

Figure 5: The norm of h for increasing iterations

The successive differences of the {L^1} norms are monotonically decreasing, but only very slowly. Certainly, they appear to be decreasing much more slowly than {1/n}; if that is the case then their sum diverges, the norm tends to infinity, and f is a counterexample to the hypotheses mentioned above. However, I do not have a proof of this. Also, the complexity of the calculations used to obtain the numbers above grows exponentially in the number n of iterations: the number of time points in the partition grows like {4^n}, and the number of x values at which f and h are computed grows like {2^n}. So, the time taken is of order {8^n}, and it quickly becomes infeasible to compute h as the number of iterations becomes large.
