Why the normality assumption in linear regression?
My question is very simple: why do we choose the normal distribution as the one the error term follows in the assumptions of linear regression? Why don't we choose others, such as the uniform or the t distribution?
Tags: [regression] [mathematical-statistics] [normal-distribution] [error] [linear]
asked 3 hours ago by Master Shi
We don't choose the normal assumption. It just happens to be the case that when the error is normal, the model coefficients exactly follow a normal distribution and an exact F-test can be used to test hypotheses about them. – AdamO, 2 hours ago

Because the math works out easily enough that people could use it before modern computers. – Nat, 2 hours ago
1 Answer
You can choose another error distribution; doing so basically just changes the loss function. This is certainly done.
Laplace (double exponential) errors correspond to least absolute deviations ($L_1$) regression, which numerous posts on this site discuss. Regressions with t-errors are occasionally used (in some cases because they're more robust to gross errors), though they can have a disadvantage: the likelihood (and therefore the negative of the loss) can have multiple modes.
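For concreteness, here is a minimal sketch of $L_1$ regression in Python, fitting by directly minimizing the summed absolute residuals. The data, coefficients, and starting values are invented purely for illustration; in practice you would more likely use a linear-programming formulation or a quantile-regression routine at the median.

```python
# Minimal sketch: least absolute deviations (L1) regression by direct
# minimization of the summed absolute residuals, on simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 0.5 * x + rng.laplace(scale=1.0, size=100)  # Laplace errors

def l1_loss(beta):
    residuals = y - (beta[0] + beta[1] * x)
    return np.abs(residuals).sum()

# Nelder-Mead copes with the non-smooth objective.
fit = minimize(l1_loss, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)  # should be roughly [2.0, 0.5]
```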
Uniform errors correspond to an $L_\infty$ loss (minimize the maximum deviation); such regression is sometimes called Chebyshev approximation (though beware, since there's another technique with essentially the same name). Again, this is sometimes done. Indeed, for simple regression with smallish data sets, bounded errors, and constant spread, the fit is often easy enough to find by hand directly on a plot, though in practice you can use linear programming methods or other algorithms; in fact, $L_\infty$ and $L_1$ regression problems are duals of each other, which can lead to convenient shortcuts for some problems.
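As a sketch of the linear-programming route, $L_\infty$ regression can be written as: minimize $t$ subject to $|y_i - \beta_0 - \beta_1 x_i| \le t$ for all $i$. Below is one way to set that up with scipy.optimize.linprog; the simulated data and true coefficients are assumptions for illustration.

```python
# Minimal sketch: L-infinity (Chebyshev) regression as a linear program,
# minimizing the maximum absolute residual, on simulated data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.uniform(-1.0, 1.0, size=50)  # bounded (uniform) errors

n = len(x)
# Decision vector z = (b0, b1, t): intercept, slope, max absolute residual.
c = np.array([0.0, 0.0, 1.0])  # objective: minimize t
A = np.vstack([
    np.column_stack([-np.ones(n), -x, -np.ones(n)]),  #  y - b0 - b1*x <= t
    np.column_stack([ np.ones(n),  x, -np.ones(n)]),  # -(y - b0 - b1*x) <= t
])
b = np.concatenate([-y, y])
res = linprog(c, A_ub=A, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
print(res.x[:2])  # fitted coefficients, roughly [2.0, 0.5]
```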
Many other choices are possible and quite a few have been used in practice.
[Note that if you have additive, independent, constant-spread errors with a density of the form $k\,\exp(-c\,g(\varepsilon))$, maximizing the likelihood will correspond to minimizing $\sum_i g(e_i)$, where $e_i$ is the $i$th residual.]
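A quick numerical illustration of that note (simulated data; the helper name `fit` and the setup are mine, not from the answer): plugging $g(e)=e^2$ into the minimization recovers least squares, while $g(e)=|e|$ recovers $L_1$ regression, and the two fits differ only through the choice of $g$.

```python
# Minimal sketch of the likelihood-loss correspondence: for errors with
# density proportional to exp(-c * g(eps)), the MLE of the coefficients
# minimizes sum_i g(e_i). Different g give different regressions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)

def fit(g):
    def loss(beta):
        return np.sum(g(y - beta[0] - beta[1] * x))
    return minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead").x

print(fit(lambda e: e**2))  # g(e) = e^2 -> least squares (normal errors)
print(fit(np.abs))          # g(e) = |e| -> L1 regression (Laplace errors)
```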
answered 3 hours ago, edited 1 hour ago, by Glen_b♦