Showing that an estimator is consistent
Let $X_1,X_2,\ldots,X_n$ be a random sample from $\mathcal{N}(\theta,1)$. Consider the following (randomized) estimator of $\theta$ given a sample of size $n$:
$$
\hat{\theta}_n = \bar{X} + \begin{cases}
0 & \text{with probability } 1-1/n,\\
n & \text{with probability } 1/n.
\end{cases}
$$
- Is $\hat{\theta}_n$ consistent? Prove or disprove.
- Is $\hat{\theta}_n$ asymptotically unbiased? Prove or disprove.
Any possible hints?
asymptotics sampling parameter-estimation estimation-theory sampling-theory
asked Nov 28 at 10:28
Newt
1 Answer
I can't comment (yet), so I'll add this as an answer.
I will assume that $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$.
1) In this setting, consistency means that $\hat{\theta}_n \to \theta$ in probability. For a first hint, look at the weak law of large numbers: https://en.wikipedia.org/wiki/Law_of_large_numbers and note that if $(Z_n)$ and $(Y_n)$ are sequences of random variables which converge in probability to $Z$ and $Y$ respectively, then $(Z_n + Y_n)$ converges in probability to $Z+Y$ (this is easy to prove). In your setting it should be easy to show, directly from the definition, that the random variable
$$W_n = \begin{cases}
0 & \text{with probability } 1-1/n,\\
n & \text{with probability } 1/n
\end{cases}$$
converges to 0 in probability. Together, these facts should allow you to answer the question.
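In case it is not obvious, here is a one-line sketch of the sum fact (nothing beyond the triangle inequality and a union bound): for any $\epsilon>0$, if $|(Z_n+Y_n)-(Z+Y)|>\epsilon$ then at least one of $|Z_n-Z|$ and $|Y_n-Y|$ must exceed $\epsilon/2$, so
$$
P\bigl(|(Z_n+Y_n)-(Z+Y)|>\epsilon\bigr) \le P\bigl(|Z_n-Z|>\tfrac{\epsilon}{2}\bigr) + P\bigl(|Y_n-Y|>\tfrac{\epsilon}{2}\bigr) \to 0
$$
as $n\to\infty$.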
2) Asymptotic unbiasedness requires that $\mathbb{E}(\hat{\theta}_n) - \theta \to 0$ as $n\to\infty$. Here, compute $\mathbb{E}(\hat{\theta}_n) - \theta$ and see what you can conclude about its limit as $n\to\infty$.
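If you want to see both points numerically before writing the proofs, here is a small simulation sketch in Python/NumPy (the value $\theta = 2$ and the tolerance $\epsilon = 0.1$ are arbitrary choices for illustration, not part of the problem). It draws $\hat{\theta}_n$ many times for several sample sizes and reports the empirical tail probability $P(|\hat{\theta}_n - \theta| > \epsilon)$ together with the empirical bias:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 2.0      # illustrative true mean (an assumption; any value works)
eps = 0.1        # tolerance used in the tail probability
reps = 200_000   # Monte Carlo repetitions per sample size

for n in (10, 100, 1_000, 10_000):
    # The sample mean of n i.i.d. N(theta, 1) draws is exactly N(theta, 1/n),
    # so we can draw it directly instead of averaging n values.
    xbar = rng.normal(theta, 1.0 / np.sqrt(n), size=reps)
    # Randomized term W_n: equals n with probability 1/n, and 0 otherwise.
    w = np.where(rng.random(reps) < 1.0 / n, n, 0.0)
    theta_hat = xbar + w
    tail = np.mean(np.abs(theta_hat - theta) > eps)  # estimates P(|theta_hat - theta| > eps)
    bias = theta_hat.mean() - theta                  # estimates E(theta_hat) - theta
    print(f"n={n:6d}  P(|error| > eps) ~ {tail:.4f}  empirical bias ~ {bias:.3f}")
```

As $n$ grows, compare how the two printed quantities behave with the limits you obtain on paper, in particular with $\mathbb{E}(W_n)$ computed by hand.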
Actually, I'm having trouble getting $\mathbb{E}(\hat{\theta}_n)$ and $\operatorname{Var}(\hat{\theta}_n)$... how can I calculate them?
– Newt
Nov 28 at 11:06
Since expectations are linear, $\mathbb{E}(\hat{\theta}_n) = \frac{1}{n}\sum_{i=1}^n\mathbb{E}(X_i) + \mathbb{E}(W_n)$. Why do you need to compute the variance?
– Alex Hodges
Nov 28 at 11:09
Oh, now I understand your method. But in order to prove that $W_n$ converges in probability to some value, don't we need its variance? I mean for Chebyshev's inequality.
– Newt
Nov 28 at 11:13
In this case it's probably easier to show it directly from the definition of convergence in probability. That is, for any $\epsilon>0$ we have $P(|W_n|>\epsilon)\to 0$ as $n\to\infty$.
– Alex Hodges
Nov 28 at 11:17
And why does $W_n$ converge to 0 in probability?
– Newt
Nov 28 at 11:17
answered Nov 28 at 10:59
Alex Hodges