Monotone likelihood ratio

Figure: A monotone likelihood ratio in two distributions f(x) and g(x). The ratio of the density functions shown is increasing in x, so f(x)/g(x) satisfies the monotone likelihood ratio property.

In statistics, the monotone likelihood ratio property is a property of the ratio of two probability density functions (PDFs). Formally, distributions f(x) and g(x) bear the property if

\text{for every }x_1 > x_0, \quad \frac{f(x_1)}{g(x_1)} \geq \frac{f(x_0)}{g(x_0)}

that is, if the ratio is nondecreasing in the argument x.

If the functions are first-differentiable, the property may sometimes be stated as

\frac{\partial}{\partial x} \left( \frac{f(x)}{g(x)} \right) \geq 0

For two distributions that satisfy the definition with respect to some argument x, we say they "have the MLRP in x." For a family of distributions that all satisfy the definition with respect to some statistic T(X), we say they "have the MLR in T(X)."
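
As a quick illustration of the definition (the two densities below are illustrative choices, not from the article), the Python sketch picks f = Normal(1, 1) and g = Normal(0, 1), whose ratio works out to exp(x - 1/2), and checks numerically that the ratio is non-decreasing in x.

# Minimal numerical sketch of the MLRP definition, assuming the
# illustrative pair f = Normal(1, 1) and g = Normal(0, 1).
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 1001)
ratio = norm.pdf(x, loc=1, scale=1) / norm.pdf(x, loc=0, scale=1)

# MLRP in x: the ratio f(x)/g(x) never decreases as x increases.
assert np.all(np.diff(ratio) >= -1e-12), "f/g is not monotone in x"
print(ratio[0], ratio[-1])  # small on the left, large on the right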

Intuition

The MLRP is used to represent a data-generating process that enjoys a straightforward relationship between the magnitude of some observed variable and the distribution it draws from. If f(x) satisfies the MLRP with respect to g(x), the higher the observed value x, the more likely it was drawn from distribution f rather than g. As usual for monotonic relationships, the likelihood ratio's monotonicity comes in handy in statistics, particularly when using maximum-likelihood estimation. Also, distribution families with MLR have a number of well-behaved stochastic properties, such as first-order stochastic dominance and increasing hazard ratios. Unfortunately, as is also usual, the strength of this assumption comes at the price of realism. Many processes in the world do not exhibit a monotonic correspondence between input and output.

Example: Working hard or slacking off

Suppose you are working on a project, and you can either work hard or slack off. Call your choice of effort e and the quality of the resulting project q. If the MLRP holds for the distribution of q conditional on your effort e, the higher the quality the more likely you worked hard. Conversely, the lower the quality the more likely you slacked off.

  1. Choose effort e \in \{H,L\} where H means high, L means low
  2. Observe q drawn from f(q\mid e). By Bayes' law with a uniform prior,
    Pr[e=H\mid q]=\frac{f(q\mid H)}{f(q\mid H)+f(q\mid L)}
  3. Suppose f(q\mid e) satisfies the MLRP. Rearranging, the probability the worker worked hard is
\frac{1}{1+f(q\mid L)/f(q\mid H)}
which, thanks to the MLRP, is monotonically increasing in q. Hence if some employer is doing a "performance review" he can infer his employee's behavior from the merits of his work.
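
A hedged numerical sketch of this example follows; the specific output distributions are illustrative assumptions, not taken from the article. Take q to be Normal(1, 1) under high effort and Normal(0, 1) under low effort, form the posterior from Bayes' law with a uniform prior, and check that it is increasing in q.

# Sketch of the "performance review" inference, assuming q ~ Normal(1, 1)
# under high effort and q ~ Normal(0, 1) under low effort (illustrative).
import numpy as np
from scipy.stats import norm

q = np.linspace(-4, 6, 1001)
f_H = norm.pdf(q, loc=1, scale=1)   # f(q | e = H)
f_L = norm.pdf(q, loc=0, scale=1)   # f(q | e = L)

# Bayes' law with a uniform prior over {H, L}.
posterior_H = 1.0 / (1.0 + f_L / f_H)

# Because the family has the MLRP, the posterior is increasing in q.
assert np.all(np.diff(posterior_H) >= -1e-12)
print(posterior_H[0], posterior_H[-1])  # near 0 on the left, near 1 on the right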

Families of distributions satisfying MLR

Statistical models often assume that data are generated by a distribution from some family of distributions and seek to determine that distribution. This task is simplified if the family has the Monotone Likelihood Ratio Property (MLRP).

A family of density functions \{ f_\theta (x)\}_{\theta\in \Theta} indexed by a parameter \theta taking values in an ordered set \Theta is said to have a monotone likelihood ratio (MLR) in the statistic T(X) if for any \theta_1 < \theta_2,

\frac{f_{\theta_2}(X=x_1,x_2,x_3,\dots)}{f_{\theta_1}(X=x_1,x_2,x_3,\dots)}   is a non-decreasing function of T(X).

Then we say the family of distributions "has MLR in T(X)".

List of families

Family                              T(X) in which f_\theta(X) has the MLR
Exponential[\lambda]                \sum x_i (sum of the observations)
Binomial[n,p]                       \sum x_i (sum of the observations)
Poisson[\lambda]                    \sum x_i (sum of the observations)
Normal[\mu,\sigma], \sigma known    \sum x_i (sum of the observations)
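
As a hedged check of one row of the table (the rates and sample size below are illustrative choices, not from the article), the sketch draws i.i.d. Poisson samples and verifies that the joint likelihood ratio between two rates \lambda_2 > \lambda_1 is non-decreasing in T(X) = \sum x_i.

# Numerical check that the Poisson family has MLR in T(X) = sum(x_i),
# using illustrative parameter values lam1 < lam2 and sample size n.
import numpy as np
from scipy.stats import poisson

lam1, lam2, n = 2.0, 3.0, 5

def joint_pmf(xs, lam):
    # Joint pmf of an i.i.d. Poisson(lam) sample.
    return np.prod(poisson.pmf(xs, lam))

rng = np.random.default_rng(0)
samples = [rng.poisson(lam1, size=n) for _ in range(200)]

# Order the samples by T(X) = sum(x_i) and check that the likelihood
# ratio is (weakly) increasing along that ordering.
samples.sort(key=lambda xs: xs.sum())
T = [int(xs.sum()) for xs in samples]
ratios = [joint_pmf(xs, lam2) / joint_pmf(xs, lam1) for xs in samples]
for i in range(1, len(samples)):
    if T[i] > T[i - 1]:
        assert ratios[i] >= ratios[i - 1] - 1e-12
print("likelihood ratio is non-decreasing in sum(x_i)")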

Hypothesis testing

If the family of random variables has the MLRP in T(X), a uniformly most powerful test can easily be determined for the hypotheses H_0 : \theta \le \theta_0 versus H_1 : \theta > \theta_0.
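
A hedged sketch of what such a test looks like in one concrete case (the normal model, sample size, and level below are illustrative assumptions, not from the article): with X_1, ..., X_n i.i.d. Normal(\mu, 1) and T(X) = \sum x_i, the uniformly most powerful level-\alpha test of H_0 : \mu \le \mu_0 against H_1 : \mu > \mu_0 rejects when T(X) exceeds a cutoff calibrated at \mu = \mu_0.

# Sketch of a UMP one-sided test based on T(X) = sum(x_i), assuming
# X_i ~ Normal(mu, 1) with known unit variance (illustrative model).
import numpy as np
from scipy.stats import norm

def ump_test(x, mu0=0.0, alpha=0.05):
    # Return True if H0: mu <= mu0 is rejected at level alpha.
    n = len(x)
    T = np.sum(x)
    # Under mu = mu0, T has mean n * mu0 and variance n, so the cutoff
    # is the (1 - alpha) quantile of that normal distribution.
    cutoff = n * mu0 + np.sqrt(n) * norm.ppf(1 - alpha)
    return T > cutoff

rng = np.random.default_rng(1)
print(ump_test(rng.normal(0.0, 1.0, size=50)))  # usually False (H0 true)
print(ump_test(rng.normal(0.5, 1.0, size=50)))  # usually True  (H0 false)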

Example: Effort and output

Let e be an input into a stochastic technology, such as a worker's effort, and y its output, whose likelihood is described by the probability density function f(y;e). The monotone likelihood ratio property (MLRP) of the family f is then expressed as follows: for any e_1 < e_2, the ratio f(y;e_2)/f(y;e_1) is increasing in y.

Relation to other statistical properties

If a family of distributions f_\theta(x) has the monotone likelihood ratio property in T(X),

  1. the family has monotone decreasing hazard rates in \theta (but not necessarily in T(X))
  2. the family exhibits first-order (and hence second-order) stochastic dominance in x, and the best Bayesian update of \theta is increasing in T(X).

But not conversely: neither monotone hazard rates nor stochastic dominance implies the MLRP.

Proofs

Let distribution family f_\theta satisfy MLR in x, so that for \theta_1>\theta_0 and x_1>x_0:

\frac{f_{\theta_1}(x_1)}{f_{\theta_0}(x_1)} \geq \frac{f_{\theta_1}(x_0)}{f_{\theta_0}(x_0)},

or equivalently:

f_{\theta_1}(x_1) f_{\theta_0}(x_0) \geq f_{\theta_1}(x_0) f_{\theta_0}(x_1). \,

Integrating this expression twice, we obtain:

1. With respect to x_0, from the lower end of the support up to x_1:
\int_{\min_{x \in X}}^{x_1} f_{\theta_1}(x_1) f_{\theta_0}(x_0) \, dx_0
 \geq \int_{\min_{x \in X}}^{x_1} f_{\theta_1}(x_0) f_{\theta_0}(x_1) \, dx_0

The left side integrates to f_{\theta_1}(x_1)\,F_{\theta_0}(x_1) and the right side to F_{\theta_1}(x_1)\,f_{\theta_0}(x_1); rearranging (and writing x for the arbitrary point x_1) gives

 \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)} \geq \frac{F_{\theta_1}(x)}{F_{\theta_0}(x)}
2. With respect to x_1, from x_0 up to the upper end of the support:
\int_{x_0}^{\max_{x \in X}} f_{\theta_1}(x_1) f_{\theta_0}(x_0) \, dx_1
 \geq \int_{x_0}^{\max_{x \in X}} f_{\theta_1}(x_0) f_{\theta_0}(x_1) \, dx_1

The left side integrates to f_{\theta_0}(x_0)\,(1-F_{\theta_1}(x_0)) and the right side to f_{\theta_1}(x_0)\,(1-F_{\theta_0}(x_0)); rearranging (and writing x for the arbitrary point x_0) gives

 \frac{1-F_{\theta_1}(x)}{1-F_{\theta_0}(x)} \geq \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)}

First-order stochastic dominance

Chaining the two inequalities above gives \frac{1-F_{\theta_1}(x)}{1-F_{\theta_0}(x)} \geq \frac{F_{\theta_1}(x)}{F_{\theta_0}(x)}; cross-multiplying and simplifying yields first-order dominance:

F_{\theta_1}(x) \leq F_{\theta_0}(x) \ \forall x

Monotone hazard rate

Use only the second inequality above to get a monotone hazard rate:

\frac{f_{\theta_1}(x)}{1-F_{\theta_1}(x)} \leq \frac{f_{\theta_0}(x)}{1-F_{\theta_0}(x)} \ \forall x
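
A hedged numerical check of both implications; the normal location family below is an illustrative choice (\theta indexes the mean, so f_{\theta_1}/f_{\theta_0} is increasing in x and the MLR holds).

# Verify first-order stochastic dominance and the hazard rate ordering
# for an illustrative MLR family: Normal(theta, 1) with theta1 > theta0.
import numpy as np
from scipy.stats import norm

theta0, theta1 = 0.0, 1.0
x = np.linspace(-6, 6, 1001)

F0, F1 = norm.cdf(x, loc=theta0), norm.cdf(x, loc=theta1)
f0, f1 = norm.pdf(x, loc=theta0), norm.pdf(x, loc=theta1)

# First-order stochastic dominance: F_theta1(x) <= F_theta0(x) for all x.
assert np.all(F1 <= F0 + 1e-12)

# Monotone hazard rate in theta: f_theta1/(1 - F_theta1) <= f_theta0/(1 - F_theta0).
assert np.all(f1 / (1 - F1) <= f0 / (1 - F0) + 1e-9)
print("both implications hold on the grid")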

Uses

Economics

The MLR is an important condition on the type distribution of agents in mechanism design. Most solutions to mechanism design models assume that the type distribution satisfies the MLR, in order to take advantage of a common solution method.