# Manipulating the Normal Distribution

The normal (or Gaussian) distribution is ubiquitous throughout probability theory for various reasons: the central limit theorem, the fact that it arises naturally in many practical applications, and the nice properties it satisfies, which make it amenable to mathematical manipulation. It is, therefore, one of the first continuous distributions that students encounter at school. As such, it is not something that I have spent much time discussing on this blog, which is usually concerned with more advanced topics. However, normal distributions satisfy many nice properties, and admit methods, which greatly simplify the manipulation of expressions in which they appear. While it is usually possible to ignore these, and instead just substitute in the density function and manipulate the resulting integrals, that approach can get very messy. So, I will describe some of the basic results and ideas that I use frequently.

Throughout, I assume the existence of an underlying probability space ${(\Omega,\mathcal F,{\mathbb P})}$. Recall that a real-valued random variable X has the standard normal distribution if it has a probability density function given by, $\displaystyle \varphi(x)=\frac1{\sqrt{2\pi}}e^{-\frac{x^2}2}.$

For it to be a probability density, it must integrate to one. While it is not obvious that the normalization factor ${1/\sqrt{2\pi}}$ is the correct value for this to hold, it is the one fact that I state here without proof; Wikipedia lists a couple of proofs, which can be referred to. By symmetry of the density, ${-X}$ and ${X}$ have the same distribution, so ${{\mathbb E}[X]={\mathbb E}[-X]=-{\mathbb E}[X]}$ and, therefore, ${{\mathbb E}[X]=0}$.
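As a quick numerical sanity check (an illustration, not part of the argument, and assuming numpy and scipy are available), both the normalization and the vanishing mean can be confirmed by quadrature over the real line:

```python
import numpy as np
from scipy.integrate import quad

# Standard normal density: phi(x) = exp(-x^2/2) / sqrt(2*pi).
def phi(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# quad handles infinite integration limits directly.
total, _ = quad(phi, -np.inf, np.inf)
print(total)  # close to 1

# The mean vanishes by symmetry: x*phi(x) is an odd function.
mean, _ = quad(lambda x: x * phi(x), -np.inf, np.inf)
print(mean)  # close to 0
```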

The derivative of the density function satisfies the useful identity $\displaystyle \varphi^\prime(x)=-x\varphi(x).$ (1)
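Identity (1) follows by differentiating ${e^{-x^2/2}}$ directly, and can be spot-checked numerically; the sketch below (my own illustration, assuming numpy) compares a central finite difference against ${-x\varphi(x)}$ at a few points:

```python
import numpy as np

def phi(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Central finite-difference approximation to phi'(x).
h = 1e-6
xs = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
numeric = (phi(xs + h) - phi(xs - h)) / (2 * h)

# Identity (1): phi'(x) = -x * phi(x).
exact = -xs * phi(xs)
print(np.max(np.abs(numeric - exact)))  # small discretization error
```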

This allows us to quickly verify, by an application of integration by parts, that standard normal variables have unit variance. $\displaystyle \begin{aligned} {\mathbb E}[X^2] &=\int x^2\varphi(x)dx\\ &= -\int x\varphi^\prime(x)dx\\ &=\int\varphi(x)dx-[x\varphi(x)]_{-\infty}^\infty=1 \end{aligned}$ Here, the boundary term vanishes since ${x\varphi(x)\rightarrow0}$ as ${x\rightarrow\pm\infty}$ and, as ${{\mathbb E}[X]=0}$, this gives ${{\rm Var}(X)={\mathbb E}[X^2]=1}$.
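The conclusion ${{\mathbb E}[X^2]=1}$ can also be confirmed by direct quadrature, as a sanity check (again assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

def phi(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Second moment of the standard normal: integral of x^2 * phi(x).
second_moment, _ = quad(lambda x: x**2 * phi(x), -np.inf, np.inf)
print(second_moment)  # close to 1
```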

Another identity satisfied by the normal density function is, $\displaystyle \varphi(x+y)=e^{-xy - \frac{y^2}2}\varphi(x)$ (2)
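Identity (2) is immediate from expanding the square in the exponent, since ${(x+y)^2/2=x^2/2+xy+y^2/2}$. A brief numerical check (my own illustration, assuming numpy) confirms both sides agree to machine precision at arbitrary points:

```python
import numpy as np

def phi(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Compare both sides of identity (2) at random points.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = rng.normal(size=20)

lhs = phi(x + y)
rhs = np.exp(-x * y - y**2 / 2) * phi(x)
print(np.max(np.abs(lhs - rhs)))  # on the order of machine precision
```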

This enables us to prove the following very useful result. In fact, it is difficult to overstate how helpful this result can be. I make use of it frequently when manipulating expressions involving normal variables, as it significantly simplifies the calculations. It is also easy to remember, and simple to derive if needed.

**Theorem 1** Let X be standard normal and ${f\colon{\mathbb R}\rightarrow{\mathbb R}_+}$ be measurable. Then, for all ${\lambda\in{\mathbb R}}$, $\displaystyle \begin{aligned} {\mathbb E}[e^{\lambda X}f(X)] &={\mathbb E}[e^{\lambda X}]{\mathbb E}[f(X+\lambda)]\\ &=e^{\frac{\lambda^2}{2}}{\mathbb E}[f(X+\lambda)]. \end{aligned}$ (3)
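A Monte Carlo sketch of (3), under my own arbitrary choice of test function ${f(x)=x^2}$ (for which ${{\mathbb E}[f(X+\lambda)]=1+\lambda^2}$ in closed form), assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.7
x = rng.standard_normal(1_000_000)

f = lambda t: t**2  # arbitrary nonnegative test function

# Left-hand side of (3): E[e^{lam X} f(X)].
lhs = np.mean(np.exp(lam * x) * f(x))

# Right-hand side of (3): e^{lam^2/2} E[f(X + lam)].
rhs = np.exp(lam**2 / 2) * np.mean(f(x + lam))

# For f(t) = t^2, E[f(X + lam)] = 1 + lam^2 exactly.
exact = np.exp(lam**2 / 2) * (1 + lam**2)
print(lhs, rhs, exact)  # all three agree up to Monte Carlo error
```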