# Probability density function

In mathematics, a probability density function (pdf) is a function that represents a probability distribution in terms of integrals.

Formally, a probability distribution has density f if f is a non-negative Lebesgue-integrable function ${\displaystyle \mathbb {R} \to \mathbb {R} }$ such that the probability of the interval [a, b] is given by

${\displaystyle \int _{a}^{b}f(x)\,dx}$

for any two numbers a and b with a ≤ b. This implies that the total integral of f must be 1. Conversely, any non-negative Lebesgue-integrable function with total integral 1 is the probability density of a suitably defined probability distribution.

Intuitively, if a probability distribution has density f(x), then the infinitesimal interval [x, x + dx] has probability f(x) dx.

Informally, a probability density function can be seen as a "smoothed out" version of a histogram: if one empirically samples enough values of a continuous random variable, producing a histogram depicting relative frequencies of output ranges, then this histogram will resemble the random variable's probability density, assuming that the output ranges are sufficiently narrow.
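
To make this concrete, here is a small sketch (assuming NumPy is available; the choice of a standard normal distribution, the sample size and the bin count are arbitrary) that compares a normalised histogram of samples with the underlying density:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=100_000)        # draws from a standard normal distribution

# relative-frequency histogram, normalised so that the total bar area is 1
counts, edges = np.histogram(samples, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# the standard normal density evaluated at the bin centres
density = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)

# with a large sample and narrow bins the histogram tracks the density closely
print(np.max(np.abs(counts - density)))
```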

## Simplified explanation

A probability density function is any function f(x) that describes the probability density in terms of the input variable x, in the manner described below. It must satisfy the following two conditions:

• f(x) is greater than or equal to zero for all values of x
• The total area under the graph is 1:
${\displaystyle \int _{-\infty }^{\infty }\,f(x)\,dx=1.}$

The actual probability can then be calculated by integrating the function f(x) over the interval of interest of the input variable x.

For example: the probability of the variable X being within the interval [4.3,7.8] would be

${\displaystyle \Pr(4.3\leq X\leq 7.8)=\int _{4.3}^{7.8}f(x)\,dx.}$
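
As a numerical sketch of such a calculation (the density used here, an exponential density with rate 1, is just one hypothetical choice of a valid f(x); NumPy is assumed to be available):

```python
import numpy as np

def f(x):
    # an exponential density with rate 1: e^(-x) for x >= 0, and 0 otherwise
    return np.where(x >= 0, np.exp(-x), 0.0)

# Pr(4.3 <= X <= 7.8) is the integral of f over [4.3, 7.8],
# approximated here by a Riemann sum on a fine grid
x, dx = np.linspace(4.3, 7.8, 100_000, retstep=True)
print(np.sum(f(x)) * dx)                # ~0.0132
print(np.exp(-4.3) - np.exp(-7.8))      # exact value of the integral, for comparison
```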

## Further details

For example, the continuous uniform distribution on the interval [0,1] has probability density f(x) = 1 for 0 ≤ x ≤ 1 and f(x) = 0 elsewhere. The standard normal distribution has probability density

${\displaystyle f(x)={e^{-{x^{2}/2}} \over {\sqrt {2\pi }}}}$

If a random variable X is given and its distribution admits a probability density function f(x), then the expected value of X (if it exists) can be calculated as

${\displaystyle \operatorname {E} (X)=\int _{-\infty }^{\infty }x\,f(x)\,dx}$
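
For example, for the continuous uniform distribution on [0,1] described above, this formula gives

${\displaystyle \operatorname {E} (X)=\int _{0}^{1}x\cdot 1\,dx={\tfrac {1}{2}}.}$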

Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.

A distribution has a density function if and only if its cumulative distribution function F(x) is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density:

${\displaystyle {\frac {d}{dx}}F(x)=f(x)}$
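
For example, the continuous uniform distribution on [0,1] has the cumulative distribution function F(x) = x for 0 ≤ x ≤ 1 (with F(x) = 0 below and F(x) = 1 above this interval); F is differentiable everywhere except at 0 and 1, and wherever the derivative exists it recovers the density:

${\displaystyle {\frac {d}{dx}}F(x)=1=f(x)\quad {\text{for }}0<x<1.}$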

If a probability distribution admits a density, then the probability of every one-point set {a} is zero.

It is a common mistake to think of f(x) as the probability of the one-point set {x}, but this is incorrect; in fact, f(x) will often be bigger than 1 (consider a random variable that is uniformly distributed between 0 and ½, whose density equals 2 on its support). Loosely, one may think of f(x) dx as the probability that a random variable whose probability density function is f lies in the interval from x to x + dx, where dx is an infinitely small increment.

Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.

In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:

If dt is an infinitely small number, the probability that ${\displaystyle X}$ is included within the interval (t, t + dt) is equal to ${\displaystyle f(t)\,dt}$, or:

${\displaystyle \Pr(t<X\leq t+dt)=f(t)\,dt.}$

## Link between discrete and continuous distributions

The definition of a probability density function at the start of this page makes it possible to describe the variable associated with a continuous distribution using a set of binary discrete variables associated with the intervals [a, b] (for example, a variable being worth 1 if X is in [a, b], and 0 if not).

It is also possible to represent certain discrete random variables using a probability density, via the Dirac delta function. For example, let us consider a binary discrete random variable taking the values −1 and 1, each with probability ½.

The probability density associated with this variable is:

${\displaystyle f(t)={\frac {1}{2}}(\delta (t+1)+\delta (t-1)).}$

More generally, if a discrete variable can take n different values among the real numbers, then the associated probability density function is:

${\displaystyle f(t)=\sum _{i=1}^{n}P_{i}\,\delta (t-x_{i}),}$

where ${\displaystyle x_{1},\ldots ,x_{n}}$ are the discrete values accessible to the variable and ${\displaystyle P_{1},\ldots ,P_{n}}$ are the probabilities associated with these values.

This expression allows for determining statistical characteristics of such a discrete variable (such as its mean, its variance and its kurtosis), starting from the formulas given for a continuous distribution.
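
For instance, for the two-valued variable above, the continuous formulas give the mean and the variance directly (using the sifting property of the delta function):

${\displaystyle \operatorname {E} (X)=\int _{-\infty }^{\infty }t\,f(t)\,dt={\tfrac {1}{2}}(-1)+{\tfrac {1}{2}}(1)=0,\qquad \operatorname {var} (X)=\int _{-\infty }^{\infty }t^{2}f(t)\,dt={\tfrac {1}{2}}+{\tfrac {1}{2}}=1.}$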

In physics, this description is also useful for characterizing mathematically the initial configuration of Brownian motion.

## Probability function associated to multiple variables

For continuous random variables ${\displaystyle X_{1},\ldots ,X_{n}}$, it is also possible to define a probability density function associated with the set as a whole, often called the joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables ${\displaystyle X_{1},\ldots ,X_{n}}$, the probability that a realisation of the set of variables falls inside the domain D is

${\displaystyle \Pr \left((X_{1},\ldots ,X_{n})\in D\right)=\int _{D}f_{X_{1},\dots ,X_{n}}(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{n}.}$

For i = 1, 2, …, n, let ${\displaystyle f_{X_{i}}(x_{i})}$ be the probability density function associated with the variable ${\displaystyle X_{i}}$ alone. This marginal probability density can be deduced from the joint probability density associated with the random variables ${\displaystyle X_{1},\ldots ,X_{n}}$ by integrating over all values of the other n − 1 variables:

${\displaystyle f_{X_{i}}(x_{i})=\int f(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{i-1}\,dx_{i+1}\cdots dx_{n}}$
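
For example, taking as a hypothetical joint density f(x, y) = x + y on the unit square [0,1] × [0,1] (and 0 elsewhere), integrating out y gives the marginal density of X:

${\displaystyle f_{X}(x)=\int _{0}^{1}(x+y)\,dy=x+{\tfrac {1}{2}},\qquad 0\leq x\leq 1.}$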

### Independence

Continuous random variables ${\displaystyle X_{1},\ldots ,X_{n}}$ are all independent from each other if and only if

${\displaystyle f_{X_{1},\dots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}).}$

### Corollary

If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable

${\displaystyle f_{X_{1},\dots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{1}(x_{1})\cdots f_{n}(x_{n}),}$

then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by

${\displaystyle f_{X_{i}}(x_{i})={\frac {f_{i}(x_{i})}{\int f_{i}(x)\,dx}}.}$

### Example

This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let ${\displaystyle {\vec {R}}}$ be a 2-dimensional random vector with coordinates ${\displaystyle (X,Y)}$: the probability of obtaining ${\displaystyle {\vec {R}}}$ in the quarter plane of positive x and y is

${\displaystyle \Pr \left(X>0,Y>0\right)=\int _{0}^{\infty }\int _{0}^{\infty }f_{X,Y}(x,y)\,dx\,dy.}$
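
A small numerical sketch of this computation (assuming SciPy is available, and taking as an example the case where X and Y are independent standard normal variables, so that the answer must be ¼ by symmetry):

```python
import numpy as np
from scipy.integrate import dblquad

def f_xy(y, x):
    # joint density of two independent standard normal variables
    return np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

# Pr(X > 0, Y > 0): integrate the joint density over the positive quadrant
prob, err = dblquad(f_xy, 0, np.inf, lambda x: 0, lambda x: np.inf)
print(prob)   # ~0.25
```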

## Sums of independent random variables

The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:

${\displaystyle f_{U+V}(x)=\int _{-\infty }^{\infty }f_{U}(y)f_{V}(x-y)\,dy.}$
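
As an illustrative sketch (assuming NumPy; the grid spacing is an arbitrary choice), a discretised convolution of two uniform densities on [0, 1] reproduces the triangular density of their sum:

```python
import numpy as np

dx = 0.001
x = np.arange(0, 1, dx)
f_u = np.ones_like(x)            # density of U, uniform on [0, 1]
f_v = np.ones_like(x)            # density of V, uniform on [0, 1]

# discrete approximation of the convolution integral;
# the factor dx turns the discrete sum into an approximate integral
f_sum = np.convolve(f_u, f_v) * dx

z = np.arange(len(f_sum)) * dx
# the exact density of U + V is triangular: z on [0, 1] and 2 - z on [1, 2]
exact = np.where(z <= 1, z, 2 - z)
print(np.max(np.abs(f_sum - exact)))   # small, on the order of dx
```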

## Dependent variables

If the probability density function of a random variable x is given as f(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable y which depends on x. This is also called a "change of variable" and is in practice used to generate a random variable with an arbitrary density f from a known (for instance uniform) random number generator. If the dependence is y = g(x) and the function g is monotonic, then the resulting density function is

${\displaystyle \left|{\frac {1}{g'(g^{-1}(y))}}\right|\cdot f(g^{-1}(y)).}$

Here g⁻¹ denotes the inverse function of g and g′ denotes its derivative.
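
Here is a sketch of the "random generation" use mentioned above (assuming NumPy; the target density, the exponential density e^(−y), is just one possible choice). If x is uniform on (0, 1), so that f(x) = 1, and y = g(x) = −ln(1 − x), then g⁻¹(y) = 1 − e^(−y) and g′(x) = 1/(1 − x), so the formula above gives e^(−y) as the density of y; a histogram of the transformed samples agrees with this:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=200_000)     # samples of x, uniform on [0, 1), density f(x) = 1
y = -np.log(1 - u)                # y = g(x), which should follow the exponential density

# density of y predicted by the change-of-variable formula:
# |1 / g'(g^{-1}(y))| * f(g^{-1}(y)) = e^{-y} * 1
counts, edges = np.histogram(y, bins=60, range=(0, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
predicted = np.exp(-centers)

print(np.max(np.abs(counts - predicted)))   # small sampling error
```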

For functions which are not monotonic the probability density function for y is

${\displaystyle \sum _{k=1}^{n(y)}\left|{\frac {1}{g'(g_{k}^{-1}(y))}}\right|\cdot f(g_{k}^{-1}(y))}$

where n(y) is the number of solutions in x for the equation g(x) = y, and ${\displaystyle g_{k}^{-1}(y)}$ are these solutions.
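
For example, if y = g(x) = x², then for y > 0 the equation x² = y has the two solutions ±√y, and g′(x) = 2x, so the formula gives the density of y as

${\displaystyle {\frac {1}{2{\sqrt {y}}}}\left(f({\sqrt {y}})+f(-{\sqrt {y}})\right),\qquad y>0.}$

If x is a standard normal variable, this is the chi-squared density with one degree of freedom.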

It is tempting to think that in order to find the expected value E(g(X)) one must first find the probability density of g(X). However, rather than computing

${\displaystyle E(g(X))=\int _{-\infty }^{\infty }xf_{g(X)}(x)\,dx,}$

one may instead compute

${\displaystyle E(g(X))=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx.}$

The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former.
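
As a numerical sketch of this point (assuming SciPy is available; the choice g(x) = x² with X standard normal is arbitrary), the latter integral is evaluated using only the density of X, without ever constructing the density of g(X):

```python
import numpy as np
from scipy.integrate import quad

def f_X(x):
    # density of X: standard normal
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def g(x):
    return x**2

# E(g(X)) = integral of g(x) f_X(x) dx over the whole real line
expectation, err = quad(lambda x: g(x) * f_X(x), -np.inf, np.inf)
print(expectation)   # ~1.0, the known value of E(X^2) for a standard normal variable
```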

### Multiple variables

The above formulas can be generalized to variables (which we will again call y) depending on more than one other variable. Let f(x0, x1, ..., xm-1) denote the probability density function of the variables that y depends on, and let the dependence be y = g(x0, x1, ..., xm-1). Then the resulting density function is

${\displaystyle \int _{y=g(x_{0},x_{1},\dots ,x_{m-1})}{\frac {f(x_{0},x_{1},\dots ,x_{m-1})}{\sqrt {\sum _{j=0}^{m-1}\left({\frac {\partial g}{\partial x_{j}}}(x_{0},x_{1},\dots ,x_{m-1})\right)^{2}}}}\,dV}$

where the integral is over the entire (m-1)-dimensional solution set of the subscripted equation and the symbolic dV must be replaced by a parametrization of this solution set for a particular calculation; the variables x0, x1, ..., xm-1 are then of course functions of this parametrization.

## Finding moments and variance

In particular, taking g(x) = xⁿ above, the nth moment E(Xⁿ) of the probability distribution of a random variable X is given by

${\displaystyle E(X^{n})=\int _{-\infty }^{\infty }x^{n}f_{X}(x)\,dx,}$

and the variance is

${\displaystyle \operatorname {var} (X)=E\left((X-E(X))^{2}\right)=\int _{-\infty }^{\infty }(x-E(X))^{2}f_{X}(x)\,dx}$

which, after expanding the square, gives:

${\displaystyle \operatorname {var} (X)=E(X^{2})-[E(X)]^{2}}$.
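
For example, for the continuous uniform distribution on [0,1] these formulas give

${\displaystyle \operatorname {E} (X^{2})=\int _{0}^{1}x^{2}\,dx={\tfrac {1}{3}},\qquad \operatorname {var} (X)={\tfrac {1}{3}}-\left({\tfrac {1}{2}}\right)^{2}={\tfrac {1}{12}}.}$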