Model selection is the task of choosing the best model from among a set of candidates on the basis of a performance criterion.[1] In the context of machine learning and more generally statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).
Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.[2]
In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.
Introduction
[Figure: The scientific observation cycle.]
In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model[citation needed].
Of the countless possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially[citation needed]. Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.
Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.
A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g. points are a result of i.i.d. samples), we must select a curve that describes the function that generated the points.
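To make the fit-versus-complexity tension concrete, here is a minimal sketch in Python (the data, seed, and use of NumPy's polynomial fitting are our own choices for illustration, not part of the source): six points that really lie near a straight line are fitted both by a line and by a fifth-order polynomial, which interpolates them exactly but extrapolates poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Six points that are really just a noisy straight line y = 2x + 1.
x = np.linspace(0.0, 5.0, 6)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# A degree-5 polynomial has 6 coefficients, so it interpolates the 6 points exactly.
line = np.polynomial.Polynomial.fit(x, y, deg=1)
quintic = np.polynomial.Polynomial.fit(x, y, deg=5)

print("degree-1 residual sum of squares:", np.sum((line(x) - y) ** 2))
print("degree-5 residual sum of squares:", np.sum((quintic(x) - y) ** 2))  # ~0 (exact fit)

# The extra parameters mostly capture noise: predictions just outside the
# data range diverge from the underlying line.
print("prediction at x = 6:", line(6.0), "vs", quintic(6.0))
```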
Two directions of model selection
There are two main objectives in inference and learning from data. One is scientific discovery, also called statistical inference: understanding the underlying data-generating mechanism and interpreting the nature of the data. The other is prediction of future or unseen observations, also called statistical prediction, in which the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions.
In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction.[3] The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model not be too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.
The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading.[3] Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.[4]
Criteria
Commonly used criteria for model selection include (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see Stoica & Selen (2004) for a review.
Among the various model selection methods, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.[citation needed]
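As a hedged illustration of prediction-oriented selection by cross-validation (the data-generating process, the candidate models, and the use of scikit-learn are invented for this sketch):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=200)

# Three candidate models of increasing complexity.
candidates = {
    "linear": LinearRegression(),
    "cubic": make_pipeline(PolynomialFeatures(degree=3), LinearRegression()),
    "degree-10": make_pipeline(PolynomialFeatures(degree=10), LinearRegression()),
}

# 5-fold cross-validated (negative) mean squared error; higher is better.
scores = {name: cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```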
Burnham & Anderson (2002, §6.3) say the following:
There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeled efficient and consistent. (...) Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods.
Model selection criteria are rules used to select the best statistical model among a set of candidate models.
In this lecture we focus on criteria used to select models that have been estimated by the maximum likelihood method.
For criteria used to select linear regression models, see the lecture on model selection in linear regression.
We will carefully explain the theory behind model selection criteria. But before doing that, let us preview how model selection criteria work:
we define a set of candidate models;
we estimate the parameters of each model by maximum likelihood;
we use the estimated parameters to compute the log-likelihood of each model;
we assign a score to each model; the score has the following characteristics:
it is decreasing in the log-likelihood (the better the model fits the data, the lower the score);
it is increasing in the number of parameters (the more complex the model is, the higher the score);
we choose the model that has the lowest score.
Let us see how model scores are calculated, according to some popular criteria.
Denote by:
$K$ the number of parameters;
$N$ the sample size;
$\log\widehat{L}$ the log-likelihood of the model, evaluated at the ML parameter estimates.
Model scores are calculated as follows:
Akaike Information Criterion (AIC): $\mathrm{AIC} = -2\log\widehat{L} + 2K$
Corrected Akaike Information Criterion (AICc): $\mathrm{AICc} = -2\log\widehat{L} + \frac{2KN}{N-K-1}$
Hannan-Quinn Information Criterion (HQIC): $\mathrm{HQIC} = -2\log\widehat{L} + 2K\log(\log N)$
Bayesian Information Criterion (BIC): $\mathrm{BIC} = -2\log\widehat{L} + K\log N$
All of these criteria generate a trade-off between goodness of fit (measured by the log-likelihood $\log\widehat{L}$) and model complexity (the number of parameters $K$).
The trade-off discourages overfitting.
However, some of these criteria impose a stronger penalty for model complexity.
In the list above, the criteria are ordered based on the strength of the penalty: the AIC imposes the mildest penalty, while the BIC has the strongest one.
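The four scores are easy to compute once the maximized log-likelihood is available. The sketch below is ours (the function names are not part of any standard library) and simply transcribes the formulas above:

```python
import math

def aic(loglik: float, k: int) -> float:
    """Akaike Information Criterion."""
    return -2.0 * loglik + 2.0 * k

def aicc(loglik: float, k: int, n: int) -> float:
    """Corrected AIC (small-sample correction; requires n > k + 1)."""
    return -2.0 * loglik + 2.0 * k * n / (n - k - 1)

def hqic(loglik: float, k: int, n: int) -> float:
    """Hannan-Quinn Information Criterion."""
    return -2.0 * loglik + 2.0 * k * math.log(math.log(n))

def bic(loglik: float, k: int, n: int) -> float:
    """Bayesian Information Criterion."""
    return -2.0 * loglik + k * math.log(n)

# Example: with log-likelihood -120.5, 3 parameters and 50 observations,
# the complexity penalties increase from AIC to BIC.
print(aic(-120.5, 3), aicc(-120.5, 3, 50), hqic(-120.5, 3, 50), bic(-120.5, 3, 50))
```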
We now explain the theory behind selection criteria in more detail.
First of all, we need to define precisely what we mean by statistical model.
A statistical model is a set of probability distributions that could have generated the data we are analyzing.
Suppose that we observe $N$ data points independently drawn from the same probability distribution (in technical terms, they are IID draws).
If we assume that the draws come from a normal distribution, then we are formulating a statistical model: we are restricting our attention to the set of all normal distributions and we are ruling out all the probability distributions that are not normal.
The normal distribution has two parameters, the mean $\mu$ and the variance $\sigma^2$.
So, the set of distributions we are considering (the statistical model) includes many normal distributions: one for each possible pair $(\mu, \sigma^2)$.
If instead we assume that the data have been drawn from an exponential distribution, then we are formulating an alternative model.
The exponential distribution has one parameter $\lambda$, called the rate parameter.
Our statistical model is a set including many possible distributions: one for each possible value of the parameter $\lambda$.
This example, although admittedly unrealistic, introduces in a simple manner the problem that we are going to deal with: how do we select one model (normal vs exponential distribution in the example) if we deem that two or more alternative models are plausible?
Let us denote the vector of observed data by $x$.
We assume that the data are continuous.
Therefore, a model for $x$ is a family of joint probability density functions $f_m(x; \theta_m)$, parametrized by a parameter vector $\theta_m$, for each model $m = 1, \ldots, M$.
We focus on continuous distributions in order to simplify the discussion, but everything we say is valid also for discrete distributions, with straightforward modifications (replace probability densities with probability mass functions).
We denote by $g$ the unknown probability distribution that generated the data.
Finally, $\widehat{m}$ is the index of the model selected by a model selection criterion. Clearly, $\widehat{m}$ can range between $1$ and $M$.
In the example above the vector $x$ contains the $N$ data points: $x = (x_1, \ldots, x_N)$.
The number of models is $M = 2$.
The two parameter vectors are $\theta_1 = (\mu, \sigma^2)$ for the normal distribution and $\theta_2 = \lambda$ for the exponential distribution.
The joint probability density function for the first model is
$f_1(x; \theta_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x_i - \mu)^2}{2\sigma^2}\right),$
because the joint density of a vector of independent random variables is equal to the product of their marginal densities.
The joint probability density function for the second model is
$f_2(x; \theta_2) = \prod_{i=1}^{N} \lambda \exp(-\lambda x_i)\, 1_{\{x_i \geq 0\}},$
where $1_{\{x_i \geq 0\}}$ is an indicator function (equal to 1 if $x_i \geq 0$ and to 0 otherwise).
We assume that model parameters are estimated by maximum likelihood (ML).
We denote by $\widehat{\theta}_1, \ldots, \widehat{\theta}_M$ the ML estimates of the parameters of the $M$ models.
If you want to see some examples of how ML estimates are derived, you can have a look at the lectures on maximum likelihood estimation of the normal and exponential distributions.
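For reference, the closed-form ML estimates in the running example are standard results (stated here without derivation): for the normal model, $\widehat{\theta}_1 = (\widehat{\mu}, \widehat{\sigma}^2)$ with
$\widehat{\mu} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \widehat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \widehat{\mu})^2;$
for the exponential model, $\widehat{\theta}_2 = \widehat{\lambda}$ with
$\widehat{\lambda} = \frac{N}{\sum_{i=1}^{N} x_i} = \frac{1}{\widehat{\mu}}.$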
Akaike (1973) was the first to propose a general criterion for selecting models estimated by maximum likelihood.
He proposed to minimize the expected dissimilarity between the chosen model, evaluated at the ML estimate of its parameters, and the true distribution $g$.
The dissimilarity between an estimated model $f_m(x; \widehat{\theta}_m)$ and the true distribution $g$ is measured by the Kullback-Leibler divergence
$D_{KL}\left(g \,\|\, f_m(\cdot; \widehat{\theta}_m)\right) = \mathbb{E}\left[\log \frac{g(x)}{f_m(x; \widehat{\theta}_m)}\right],$
where the expected value is with respect to the true density $g$.
The expected dissimilarity is computed as
$\mathbb{E}\left[D_{KL}\left(g \,\|\, f_m(\cdot; \widehat{\theta}_m)\right)\right],$
where the expectation is over the sampling distribution of $\widehat{\theta}_m$, which, being a function of the sample $x$, is regarded as stochastic.
Ideally, we would like to select the model that minimizes the expected dissimilarity:
$\widehat{m} = \operatorname*{arg\,min}_{m \in \{1, \ldots, M\}} \mathbb{E}\left[D_{KL}\left(g \,\|\, f_m(\cdot; \widehat{\theta}_m)\right)\right].$
However, the expected dissimilarity cannot be computed exactly because the true distribution $g$ and the sampling distribution of $\widehat{\theta}_m$ are unknown.
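To make the quantity concrete, the following sketch (our own illustration, assuming SciPy is available) approximates a Kullback-Leibler divergence by Monte Carlo in an artificial setting where the true density $g$ is known because we chose it; in real applications $g$ is unknown, which is precisely why computable approximations are needed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Artificial setting: the true density g is a gamma distribution (known only
# because we chose it); the candidate model is an exponential distribution.
g = stats.gamma(a=3.0, scale=1.0)
sample = g.rvs(size=500, random_state=rng)

# Fit the exponential model by maximum likelihood (floc=0 keeps location fixed).
loc, scale = stats.expon.fit(sample, floc=0)
f = stats.expon(loc=loc, scale=scale)

# Monte Carlo estimate of KL(g || f) = E_g[log g(X) - log f(X)].
x = g.rvs(size=100_000, random_state=rng)
kl_estimate = np.mean(g.logpdf(x) - f.logpdf(x))
print("estimated KL divergence:", kl_estimate)
```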
Akaike (1973) proposed an approximation to the expected dissimilarity that can be easily computed, giving rise to the so-called Akaike Information Criterion (AIC).
As proved, for example, by Burnham and Anderson (2004), other popular selection criteria such as the AIC corrected for small-sample bias (AICc; Sugiura 1978, Hurvich and Tsai 1989) and the Bayesian Information Criterion (BIC; Schwarz 1978) are based on different approximations of the same measure of expected dissimilarity.
We briefly present here the most popular selection criteria.
According to the Akaike Information Criterion, the selected model solves the minimization problem
$\widehat{m} = \operatorname*{arg\,min}_{m \in \{1, \ldots, M\}} \mathrm{AIC}_m,$
where the value of the $m$-th model is
$\mathrm{AIC}_m = -2\log\widehat{L}_m + 2K_m,$
where $\log\widehat{L}_m$ is the log-likelihood of the $m$-th model evaluated at $\widehat{\theta}_m$ and $K_m$ is the number of parameters to be estimated in the $m$-th model.
Note that any increasing linear transformation applied to all model values does not change the selected model. As a matter of fact, many references define the value of the $m$-th model as an increasing linear transformation of this expression (for example, the expression above divided by 2).
An approximation that is more precise in small samples is the so-called corrected Akaike Information Criterion (AICc), according to which the value to be minimized is
$\mathrm{AICc}_m = -2\log\widehat{L}_m + \frac{2 K_m N}{N - K_m - 1},$
where $N$ is the size of the sample being used for estimation.
Another popular criterion is the Bayesian Information Criterion (BIC), according to which the selected model is the one that achieves the minimum value of
$\mathrm{BIC}_m = -2\log\widehat{L}_m + K_m \log N.$
Its name is due to the fact that it can be justified by using some Bayesian arguments.
If the sample size is large, it penalizes model complexity much more than Akaike's criteria.
A compromise between Akaike's criteria (mild penalty for the number of parameters) and the Bayesian criterion (large penalty) is provided by the Hannan-Quinn Information Criterion:
$\mathrm{HQIC}_m = -2\log\widehat{L}_m + 2 K_m \log(\log N).$
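Putting the pieces together for the running normal-versus-exponential example, a minimal sketch (assuming SciPy; the synthetic data and the restriction to AIC and BIC are our own choices) might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = stats.expon(scale=2.0).rvs(size=200, random_state=rng)  # synthetic data
n = x.size

# Model 1: normal, K = 2 parameters (mean and variance), fitted by ML.
mu, sigma = stats.norm.fit(x)
loglik_norm = np.sum(stats.norm.logpdf(x, loc=mu, scale=sigma))

# Model 2: exponential, K = 1 parameter (rate), fitted by ML (location fixed at 0).
loc, scale = stats.expon.fit(x, floc=0)
loglik_expo = np.sum(stats.expon.logpdf(x, loc=loc, scale=scale))

def scores(loglik, k, n):
    return {"AIC": -2 * loglik + 2 * k,
            "BIC": -2 * loglik + k * np.log(n)}

print("normal     :", scores(loglik_norm, 2, n))
print("exponential:", scores(loglik_expo, 1, n))
# The model with the lowest score is selected; here it should be the exponential one,
# since the data were actually generated from an exponential distribution.
```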
As you might have noticed, all of these criteria penalize the dimension of the model: the higher the number of parameters $K_m$ is, the more the model is penalized.
This penalty for complexity is typical of model selection criteria: a model with many parameters is more likely to over-fit, that is, to have a spuriously high value of the log-likelihood $\log\widehat{L}_m$.
For a discussion of over-fitting see the lecture on the R squared of a linear regression.
The complexity penalty is also related to the so-called bias-variance trade-off: by increasing model complexity, we usually decrease the bias and increase the variance.
Beyond a certain degree of complexity, increases in variance are larger than reductions in bias, and, as a consequence, the quality of our inferences becomes worse.
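A rough numerical illustration of the trade-off (the data-generating process, noise level, and degree grid are invented for this sketch): as polynomial degree grows, the in-sample error keeps falling, while the out-of-sample error typically turns back up once the added flexibility mostly fits noise.

```python
import numpy as np

rng = np.random.default_rng(4)

def make_data(n):
    x = rng.uniform(-1, 1, size=n)
    return x, np.sin(3 * x) + rng.normal(scale=0.3, size=n)

x_train, y_train = make_data(40)
x_test, y_test = make_data(1000)

for degree in (1, 3, 5, 9, 15):
    poly = np.polynomial.Polynomial.fit(x_train, y_train, deg=degree)
    train_mse = np.mean((poly(x_train) - y_train) ** 2)
    test_mse = np.mean((poly(x_test) - y_test) ** 2)
    # Train error decreases with degree; test error typically turns back up
    # once extra flexibility starts fitting noise (variance dominates bias).
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```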
References
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Petrov, B. N. and Csaki, F. (eds.), Second International Symposium on Information Theory, Akademiai Kiado, Budapest, pp. 276-281.
Burnham, K. P. and Anderson, D. R. (2004). Multimodel inference: understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), pp. 261-304.
Hurvich, C. M. and Tsai, C. L. (1989). Regression and time series model selection in small samples. Biometrika, 76(2), pp. 297-307.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), pp. 461-464.
Sugiura, N. (1978). Further analysis of the data by Akaike's information criterion and the finite corrections. Communications in Statistics - Theory and Methods, 7(1), pp. 13-26.
Please cite as:
Taboga, Marco (2021). "Model selection criteria", Lectures on probability theory and mathematical statistics. Kindle Direct Publishing. Online appendix. https://www.statlect.com/fundamentals-of-statistics/model-selection-criteria.