Stationary distribution of Cox-Ingersoll-Ross process
I am uncertain how to go about the following problem from the lecture notes of a course on SDEs. We are given the following SDE:
$dX_t=\lambda\left(\xi-X_t\right)\,dt+\gamma\sqrt{|X_t|}\,dB_t,$
where $\lambda,\xi,\gamma$ are positive constants. Show that, in stationarity, $X_t$ has a Gamma distribution with rate parameter $\omega=2\lambda/\gamma^2$ and shape parameter $\nu=2\lambda\xi/\gamma^2$. Finally, what are the mean and variance in stationarity?
I know I have to find the stationary distribution by solving for $\phi$ in the following:
$-\nabla\cdot\left(u\phi-D\nabla\phi\right)=0.$
Inserting $D=\frac{1}{2}\gamma^2|X_t|$ and $u=f-\nabla D=\lambda\left(\xi-X_t\right)-\frac{1}{2}\gamma^2\frac{X_t}{|X_t|}$, I get an ugly expression involving the second derivative of the absolute value of the stochastic process. Is there a mistake so far? (I take the gradient and divergence with respect to $x=X_t$.)
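Not part of the original question, but here is a minimal numerical sketch (Euler–Maruyama discretisation of the SDE above; the parameter values, step size and run length are arbitrary illustrative choices) comparing the long-run sample moments with those of the claimed Gamma$(\nu,\omega)$ stationary law:

```python
# A minimal simulation sketch (my addition, not from the lecture notes):
# Euler-Maruyama discretisation of dX_t = lam*(xi - X_t) dt + gam*sqrt(|X_t|) dB_t.
import numpy as np

rng = np.random.default_rng(0)
lam, xi, gam = 2.0, 1.5, 1.0                 # positive constants of the SDE (arbitrary choices)
omega = 2 * lam / gam**2                     # claimed rate parameter
nu = 2 * lam * xi / gam**2                   # claimed shape parameter

dt, n_steps, burn_in = 1e-2, 200_000, 20_000
x = xi                                       # start at the mean-reversion level
samples = np.empty(n_steps - burn_in)
for i in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt))        # Brownian increment over one step
    x += lam * (xi - x) * dt + gam * np.sqrt(abs(x)) * dB
    if i >= burn_in:
        samples[i - burn_in] = x

print("sample mean    :", samples.mean(), "  Gamma mean nu/omega     =", nu / omega)
print("sample variance:", samples.var(),  "  Gamma variance nu/omega^2 =", nu / omega**2)
```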
probability-theory stochastic-processes stochastic-calculus sde
asked Nov 28 at 0:41 by thaumoctopus
1 Answer
From the Kolmogorov forward equation,
$$
\frac{d^2}{dy^2}\left(\frac{1}{2}\gamma^2y\phi(y)\right)=\frac{d}{dy}\left(\lambda(\xi-y)\phi(y)\right).
\tag{1}\label{eq:asdf}
$$
The left-hand side is equal to
$$
\frac{d}{dy}\left(\frac{1}{2}\gamma^2\phi(y)+\frac{1}{2}\gamma^2y\phi'(y)\right).
$$
So, integrating \eqref{eq:asdf} from $0$ to $y$, we get
$$
\frac{1}{2}\gamma^2\phi(y)+\frac{1}{2}\gamma^2y\phi'(y)=\lambda(\xi-y)\phi(y).
$$
Here, I've used the fact that $\phi(0)=0$ (if $X$ hits zero, it'll instantly get reflected). [$\phi(0)$ is not necessarily zero. See below.] Rearranging the last equation,
$$
\frac{\phi'(y)}{\phi(y)}
=
\frac{\lambda(\xi-y)-\frac{1}{2}\gamma^2}{\frac{1}{2}\gamma^2y}
=
\left(\frac{2\lambda\xi}{\gamma^2}-1\right)\frac{1}{y}-\frac{2\lambda}{\gamma^2}.
$$
Integrating this from $\xi$ to $y$,
$$
\log\frac{\phi(y)}{\phi(\xi)}
= \left(\frac{2\lambda\xi}{\gamma^2}-1\right)\log\frac{y}{\xi} - \frac{2\lambda}{\gamma^2}(y-\xi).
$$
So, finally,
$$
\phi(y)\propto y^{\frac{2\lambda\xi}{\gamma^2}-1}e^{-\frac{2\lambda}{\gamma^2}y},
$$
which is the pdf of a Gamma distribution.
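As a sanity check (my addition, not part of the original answer), one can verify symbolically, e.g. with SymPy, that this unnormalised density indeed satisfies the forward equation \eqref{eq:asdf}:

```python
# Symbolic check that phi(y) = y**(nu-1)*exp(-omega*y), with omega = 2*lam/gam**2
# and nu = 2*lam*xi/gam**2, satisfies d^2/dy^2((1/2)gam^2 y phi) = d/dy(lam (xi-y) phi).
import sympy as sp

y = sp.symbols('y', positive=True)
lam, xi, gam = sp.symbols('lambda xi gamma', positive=True)

omega = 2 * lam / gam**2
nu = 2 * lam * xi / gam**2
phi = y**(nu - 1) * sp.exp(-omega * y)       # unnormalised candidate density

lhs = sp.diff(sp.Rational(1, 2) * gam**2 * y * phi, y, 2)
rhs = sp.diff(lam * (xi - y) * phi, y)
print(sp.simplify(lhs - rhs))                # prints 0
```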
The mean and variance are accordingly given by
$$
\frac{2\lambda\xi}{\gamma^2}\cdot\frac{\gamma^2}{2\lambda}=\xi
\quad\text{and}\quad
\frac{2\lambda\xi}{\gamma^2}\left(\frac{\gamma^2}{2\lambda}\right)^2
=\frac{\gamma^2\xi}{2\lambda}.
$$
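For concrete parameter values, this arithmetic can also be checked numerically (my addition; SciPy parametrises the Gamma distribution by shape $a=\nu$ and scale $1/\omega$, and the parameter values below are arbitrary positive choices):

```python
# Numerical check of the stationary mean and variance via scipy's Gamma distribution.
from scipy.stats import gamma as gamma_dist

lam, xi, gam = 2.0, 1.5, 1.0
omega = 2 * lam / gam**2
nu = 2 * lam * xi / gam**2

mean, var = gamma_dist.stats(a=nu, scale=1 / omega, moments='mv')
print(float(mean), "vs xi =", xi)                                   # both 1.5
print(float(var), "vs gam^2*xi/(2*lam) =", gam**2 * xi / (2 * lam)) # both 0.375
```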
Addendum
If $\frac{2\lambda\xi}{\gamma^2}\le 1$, then $\phi(0)\ne 0$; and if $\frac{2\lambda\xi}{\gamma^2}< 1$, then $\phi'(0)=-\infty$, contrary to what I assumed above. In any case, these assumptions are not necessary. Here's the correction.
We still have
$$
\frac{1}{2}\gamma^2\phi(y)+\frac{1}{2}\gamma^2y\phi'(y)
-\lambda(\xi-y)\phi(y)=\text{constant}.
$$
Since $\phi$ is a pdf defined on $[0,\infty)$, $\phi(\infty)=0$ and $\phi'(\infty)=0$. Also, if $X_t$ is integrable with respect to the stationary distribution (which I will assume), then $y\phi(y)\rightarrow 0$ and $y\phi'(y)\rightarrow 0$ as $y\rightarrow\infty$. It follows that the constant in the last equation is zero, which brings us back to the derivation above.
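Consistent with this, a quick symbolic check (again my addition, not part of the original answer) confirms that the left-hand side above is identically zero for the Gamma density derived earlier, so the constant is indeed zero:

```python
# Symbolic check that the once-integrated (flux) expression vanishes identically
# for phi(y) = y**(nu-1)*exp(-omega*y) with omega = 2*lam/gam**2, nu = 2*lam*xi/gam**2.
import sympy as sp

y = sp.symbols('y', positive=True)
lam, xi, gam = sp.symbols('lambda xi gamma', positive=True)

omega = 2 * lam / gam**2
nu = 2 * lam * xi / gam**2
phi = y**(nu - 1) * sp.exp(-omega * y)

flux = (sp.Rational(1, 2) * gam**2 * phi
        + sp.Rational(1, 2) * gam**2 * y * sp.diff(phi, y)
        - lam * (xi - y) * phi)
print(sp.simplify(flux))                     # prints 0
```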
answered Nov 28 at 14:48 by AddSup (edited Nov 30 at 12:21)
Thank you so much. – thaumoctopus Nov 28 at 21:11
@thaumoctopus You're welcome ;) – AddSup Nov 29 at 4:05
@thaumoctopus I fixed a sloppy step in the derivation. – AddSup Nov 30 at 12:27