Interface | Description
---|---
ClosedFormMaximumLikelihoodInterface | A public interface for probability distributions that provide a method for a closed-form solution of the maximum likelihood function.
PDFGradientInterface | A public interface for probability distributions that provide a method to calculate the gradient of the density function.
PDFHessianInterface | A public interface for probability distributions that provide methods to calculate the gradient and Hessian of the density function.
Class | Description
---|---
BetaPD | The beta probability distribution.
ContinuousUniformPD | The continuous uniform probability distribution.
ExponentialPD | The exponential probability distribution.
ExtremeValuePD | The extreme value/Gumbel probability distribution.
GammaPD | The gamma probability distribution.
GeneralizedGaussianPD | The generalized Gaussian probability distribution.
GeometricPD | The geometric probability distribution.
LogisticPD | The logistic probability distribution.
LogNormalPD | The log-normal probability distribution.
MaximumLikelihoodEstimation | Maximum likelihood parameter estimation.
NormalPD | The normal (Gaussian) probability distribution.
ParetoPD | The Pareto probability distribution.
PoissonPD | The Poisson probability distribution.
ProbabilityDistribution | The ProbabilityDistribution abstract class defines members and methods common to univariate probability distributions and useful in parameter estimation.
RayleighPD | The Rayleigh probability distribution.
WeibullPD | The Weibull probability distribution.
Enum | Description
---|---
MaximumLikelihoodEstimation.OptimizationMethod | Indicates which optimization method to use in maximizing the likelihood.
BetaPD, GammaPD, NormalPD, and others extend the abstract class ProbabilityDistribution. The class MaximumLikelihoodEstimation performs maximum likelihood estimation on subclasses of ProbabilityDistribution.
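As a quick illustration of how these classes fit together, the following is a minimal, hypothetical sketch of fitting a GammaPD by maximum likelihood. The constructor `MaximumLikelihoodEstimation(double[] sample, ProbabilityDistribution dist)` and the accessor `getEstimates()` are assumptions made for illustration, not confirmed signatures; consult the class documentation for the actual API.

```java
import com.imsl.stat.distributions.GammaPD;
import com.imsl.stat.distributions.MaximumLikelihoodEstimation;

public class FitGamma {
    public static void main(String[] args) {
        // Sample assumed to be drawn from a gamma distribution.
        double[] sample = {1.2, 0.8, 2.5, 1.7, 0.9, 3.1, 1.4, 2.2};

        // Hypothetical usage: the constructor and accessor below are
        // assumptions for illustration; check the class documentation.
        MaximumLikelihoodEstimation mle =
                new MaximumLikelihoodEstimation(sample, new GammaPD());

        double[] estimates = mle.getEstimates(); // fitted parameters
        for (double e : estimates) {
            System.out.println(e);
        }
    }
}
```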
Suppose we have a random sample $$\{x_i, i=1,2,\ldots,N\}$$ from a probability distribution having a density function \( f(x;\theta) \) which depends on a vector of unknown parameters, \( \theta \). The likelihood function given the sample is the product of the probability densities evaluated at the sample points:

$$ L(\theta, \{x_i, i=1,2,\ldots,N\}) = \prod_{i=1}^{N} f(x_i;\theta) $$

The estimator

$$ \hat{\theta} = \text{argmax}_{\theta}\, L(\theta, \{x_i\}) $$

is the maximum likelihood estimator (MLE) for \( \theta \). The problem is usually expressed in terms of the log-likelihood:

$$ \hat{\theta} = \text{argmax}_{\theta} \log L(\theta, \{x_i\}) = \text{argmax}_{\theta} \sum_{i=1}^{N} \log f(x_i;\theta) $$

or, equivalently, as a minimization problem:

$$ \hat{\theta} = \text{argmin}_{\theta} \left( -\sum_{i=1}^{N} \log f(x_i;\theta) \right) $$

The likelihood problem is a constrained nonlinear optimization problem, where the constraints are determined by the domain of \( \theta \). Numerical optimization is usually successful in solving the likelihood problem for densities having first and second partial derivatives with respect to \( \theta \). Furthermore, under some general regularity conditions, the maximum likelihood estimator is consistent and asymptotically normally distributed, with mean equal to the true value of the parameter \( \theta_0 \) and variance-covariance matrix equal to the inverse of Fisher's information matrix evaluated at the true value of the parameter:

$$ Var(\hat{\theta}) = I(\theta_0)^{-1} = \left( -E_{\theta_0}\left[ \frac{\partial^2 \log L}{\partial\theta^2} \right] \right)^{-1} $$

In practice, the variance is approximated by the negative inverse Hessian of the log-likelihood evaluated at the maximum likelihood estimate:

$$ Var(\hat{\theta}) \approx -\left[ \frac{\partial^2 \log L}{\partial\theta^2} \right]^{-1}_{\hat{\theta}} $$

See Kendall and Stuart (1979) for further details on the theory of maximum likelihood.
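To make these formulas concrete, here is a small, self-contained Java sketch (illustrative only, not part of this package) for the exponential density \( f(x;\theta) = \theta^{-1} e^{-x/\theta} \). Its log-likelihood \( \log L = -N\log\theta - \sum_i x_i/\theta \) has the closed-form maximizer \( \hat{\theta} = \bar{x} \), and the negative inverse Hessian gives the variance approximation \( \hat{\theta}^2/N \). The class and variable names are made up for the example.

```java
public class ExponentialMLE {
    public static void main(String[] args) {
        // Sample assumed to come from an exponential density
        // f(x; theta) = (1/theta) * exp(-x/theta), theta > 0.
        double[] x = {0.7, 1.9, 0.4, 2.6, 1.1, 0.9, 3.2, 1.5};
        int n = x.length;

        double sum = 0.0;
        for (double xi : x) sum += xi;

        // Closed-form MLE: setting d(log L)/d(theta) = -N/theta + sum/theta^2
        // to zero gives thetaHat = sample mean.
        double thetaHat = sum / n;

        // Second derivative of the log-likelihood:
        // d^2(log L)/d(theta)^2 = N/theta^2 - 2*sum/theta^3, which equals
        // -N/thetaHat^2 at the maximum. The variance approximation is the
        // negative inverse Hessian: Var(thetaHat) ~ thetaHat^2 / N.
        double hessian = n / (thetaHat * thetaHat)
                - 2.0 * sum / Math.pow(thetaHat, 3);
        double varHat = -1.0 / hessian;

        System.out.printf("thetaHat = %.4f%n", thetaHat);
        System.out.printf("approx Var(thetaHat) = %.4f%n", varHat);
    }
}
```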