Clarifying the statement $f(x,y) = f_X(x) f_Y(y)$ for all $x, y$ (when $X$ and $Y$ are independent)
The textbook A First Course in Probability by Ross defines random variables $X$ and $Y$ to be independent when the following condition is satisfied: if $A$ and $B$ are subsets of $\mathbb R$, then
$$
\tag{0} P(X \in A, Y \in B) = P(X \in A) P(Y \in B).
$$
(I think technically we should assume that $X$ and $Y$ are Borel, but Ross glosses over that issue.)
The textbook goes on to state that if $X$ and $Y$ are jointly continuous random variables with PDFs $f_X$ and $f_Y$ and joint PDF $f$, then
$$
\tag{1} f(x,y) = f_X(x) f_Y(y) \quad \text{for all } x,y.
$$
However, I think this statement is not correct because $f_X$ and $f_Y$ are only defined up to sets of measure $0$. (For example, I think you could redefine $f_X$ arbitrarily on a set of measure $0$, and you would still have a perfectly valid PDF for $X$.)
Question: So how can we state equation (1) in a way that is correct (so that we are not speaking nonsense to students) but which is also understandable to undergrads?
Further details: Let $F$ be the joint CDF for $X$ and $Y$,
and let $F_X$ and $F_Y$ be the CDFs for $X$ and $Y$ respectively.
It can be shown that equation (0) is equivalent to
$$
F(a,b) = F_X(a) F_Y(b)
$$
for all $a, b \in \mathbb R$. In other words,
$$
F(a,b) = \int_{-\infty}^a f_X(x) \, dx \int_{-\infty}^b f_Y(y) \, dy
$$
for all $a,b \in \mathbb R$.
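As a sanity check, this factorized form of $F(a,b)$ can be verified numerically. The sketch below uses independent Exp(1) variables as an illustrative choice (the example is mine, not Ross's), for which $F(a,b) = (1 - e^{-a})(1 - e^{-b})$ in closed form:

```python
import math

# Illustrative densities (not from the text): X, Y independent Exp(1),
# so f_X(x) = e^(-x) and f_Y(y) = e^(-y) for x, y >= 0.
def f_X(x):
    return math.exp(-x) if x >= 0 else 0.0

def f_Y(y):
    return math.exp(-y) if y >= 0 else 0.0

def integral(g, lo, hi, n=100_000):
    """Midpoint Riemann sum of g over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

a, b = 1.0, 2.0
# Both densities vanish on the negative axis, so the lower limit
# of -infinity can be truncated to 0.
product_of_integrals = integral(f_X, 0.0, a) * integral(f_Y, 0.0, b)
exact_F = (1 - math.exp(-a)) * (1 - math.exp(-b))
assert abs(product_of_integrals - exact_F) < 1e-6
```

The two one-dimensional integrals are computed separately and then multiplied, which is exactly the structure of the right-hand side above.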
If $f_X$ is continuous at $a$, then differentiating both sides with respect to $a$ yields
$$
\frac{\partial F}{\partial a} = f_X(a) \int_{-\infty}^b f_Y(y) \, dy.
$$
If $f_Y$ is continuous at $b$, then differentiating with respect to $b$ yields
$$
\frac{\partial^2 F}{\partial a \, \partial b} = f_X(a) f_Y(b).
$$
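The conclusion that the mixed partial of $F$ equals $f_X(a) f_Y(b)$ can be checked with finite differences. The example is again an illustrative choice of mine (independent Exp(1) variables), not one from the text:

```python
import math

# Illustrative joint CDF (not from the text): for independent Exp(1)
# variables, F(a, b) = F_X(a) * F_Y(b) = (1 - e^(-a)) * (1 - e^(-b)).
def F(a, b):
    return (1 - math.exp(-a)) * (1 - math.exp(-b))

def mixed_partial(F, a, b, h=1e-4):
    """Central finite-difference estimate of d^2 F / (da db)."""
    return (F(a + h, b + h) - F(a + h, b - h)
            - F(a - h, b + h) + F(a - h, b - h)) / (4 * h * h)

a, b = 1.0, 2.0
expected = math.exp(-a) * math.exp(-b)  # f_X(a) * f_Y(b)
assert abs(mixed_partial(F, a, b) - expected) < 1e-5
```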
If we knew that
$$
\tag{2} \frac{\partial^2 F}{\partial a \, \partial b} = f(a,b),
$$
then we would have established that $f(a,b) = f_X(a) f_Y(b)$.
So let's see what assumptions we need in order to conclude that $\frac{\partial^2 F}{\partial a \, \partial b} = f(a,b)$. We know that
$$
\tag{3} F(a,b) = \int_{-\infty}^a \int_{-\infty}^b f(x,y) \, dy \, dx.
$$
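Equation (3) can likewise be sanity-checked numerically, using the illustrative joint density $f(x,y) = e^{-x-y}$ of two independent Exp(1) variables (my choice of example, not Ross's), whose joint CDF is known in closed form:

```python
import math

# Illustrative joint density (not from the text): f(x, y) = e^(-x - y)
# on the positive quadrant, zero elsewhere.
def f(x, y):
    return math.exp(-x - y) if x >= 0 and y >= 0 else 0.0

def double_integral(f, a, b, n=400):
    """Midpoint Riemann sum of f over [0, a] x [0, b]."""
    hx, hy = a / n, b / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx
        for j in range(n):
            total += f(x, (j + 0.5) * hy)
    return total * hx * hy

a, b = 1.0, 2.0
exact_F = (1 - math.exp(-a)) * (1 - math.exp(-b))  # closed form here
assert abs(double_integral(f, a, b) - exact_F) < 1e-4
```

Since this density vanishes off the positive quadrant, truncating the lower limits of integration at 0 loses nothing.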
If the function $x \mapsto \int_{-\infty}^b f(x,y) \, dy$ is continuous at $a$, then by the fundamental theorem of calculus, differentiating both sides of (3) with respect to $a$ yields
$$
\frac{\partial F}{\partial a} = \int_{-\infty}^b f(a,y) \, dy.
$$
If the function $y \mapsto f(a,y)$ is continuous at $b$, then differentiating with respect to $b$ yields
$$
\frac{\partial^2 F}{\partial a \, \partial b} = f(a,b).
$$
So, in order to establish (2), we needed the following two assumptions:

- The function $x \mapsto \int_{-\infty}^b f(x,y) \, dy$ is continuous at $a$.
- The function $y \mapsto f(a,y)$ is continuous at $b$.
Question: What is a nice, simple assumption I can state which will guarantee that these two conditions are met? If $f$ is continuous at $(a,b)$, then the second assumption is satisfied. But I don't see that the continuity of $f$ at $(a,b)$ would guarantee that the first condition is satisfied.
Tags: probability
Comments:

- You could say that (1) holds for all $(x,y) \in U \times V$, for any open $U$ and $V$ for which $f_X$, $f_Y$, and $f$ are continuous on $U$, $V$, and $U \times V$ respectively. Most density functions seen in undergraduate courses are continuous, so you are not losing many examples this way. – kimchi lover, Nov 26 at 2:19
- @kimchilover Thank you, that is the kind of suggestion I'm looking for. – eternalGoldenBraid, Nov 26 at 3:59
- The correct statement is: $f(x, y) = f_X(x) f_Y(y)$ holds for almost every $(x,y)$ (with respect to 2-dimensional Lebesgue measure). – Song, Nov 26 at 7:47
asked Nov 26 at 1:28 by eternalGoldenBraid (edited Nov 26 at 6:28)