(Why) can we treat a function of a variable as another independent variable?
I'm currently reading my numerical analysis textbook and something's bugging me. To get into it, let's take a look at the following differential equation:
$$u'(x) = f(x, u(x))$$
In order to determine the stability of the equation, one may calculate the Jacobian,
$$J(x, u(x)) = \left.\frac{\partial f}{\partial u}\right|_{(x, u(x))}$$
Here is a specific differential equation:
$$u'(x) = -\alpha(u(x) - \sin(x)) + \cos(x)$$
For which the Jacobian is
$$J(x, u(x)) = -\alpha$$
Basically, we treated both $\sin(x)$ and $\cos(x)$ as constants with respect to $u$, but I don't really understand why. Most of the time, when we take a derivative the variables are independent, which is not the case here, as they both depend on the same variable $x$.
This means that the "rate of change of $\sin(x)$ with respect to $u(x)$" is zero, but the value of $u(x)$ only changes if the value of $x$ itself changes, so shouldn't the value of $\sin(x)$ change as well?
Thank you!
multivariable-calculus derivatives numerical-methods jacobian numerical-calculus
"Most of the time, when we take a derivative the variables are independant, which is not the case here as they both depend on the same variable x." I don't understand. When you take the derivative of a function with one variable there is only one variable and you don't treat the variables as independent. As $sin$ and $cos$ are both dependent on one variable this is a function with one variable. Can you give an example of a case where "take a derivative the variables are independant"
– fleablood
Nov 23 at 22:02
1
"This means that the "rate of change of sin(x) with respect to u(x)" is zero, but the value of u(x) only changes if the value of x itself changes, so shouldn't the value of sin(x) change as well?" Of course it does. But that is with respect to $x$ and not with respect to $u(x)$.
– fleablood
Nov 23 at 22:08
@fleablood Thanks for your answers. I thought that because both $\sin(x)$ and $u(x)$ depend on $x$, $\sin(x)$ would also depend on $u(x)$, but as I read this it makes sense that these two functions are not linked to each other even though they both depend on the same variable.
– FredV
Nov 23 at 22:27
1
When you compute the linear expansion, you do it for the function $f:\Bbb R^{1+n}\to\Bbb R^n$. You get $f(x,u)\approx f(x, u_0)+\partial_u f(x,u_0)(u-u_0)$. Then you approximate the ODE $u'(x)=f(x,u(x))$ with the linearization of the right side. Further using $u=u_0+v$, this reads as $v'(x)=f(x,u_0)+\partial_u f(x,u_0)v(x)$.
– LutzL
Nov 24 at 15:11
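For the specific equation in the question, this can be checked symbolically. The sketch below (using SymPy, which is not part of the original discussion) treats $u$ as an independent symbol, recovers the Jacobian entry $-\alpha$, and confirms that the first-order expansion reproduces $f$ exactly, since $f$ is linear in $u$.
```python
# Minimal symbolic check of the linearization for u'(x) = -alpha*(u - sin(x)) + cos(x).
# Assumes SymPy is installed; all names here are illustrative.
import sympy as sp

x, u, u0, alpha = sp.symbols('x u u0 alpha')

# Right-hand side f(x, u), with u treated as its own independent variable
f = -alpha*(u - sp.sin(x)) + sp.cos(x)

J = sp.diff(f, u)                     # partial derivative with respect to u
print(J)                              # -alpha

# First-order expansion around u = u0, as in the comment above
linearized = f.subs(u, u0) + J*(u - u0)
print(sp.simplify(linearized - f))    # 0, because f is already linear in u
```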
2 Answers
Accepted answer (score 3)
There is a difference between the partial derivative $\frac{\partial}{\partial x}$ and the total derivative $\frac{d}{dx}$. For example, if we have variables $(u,x)$ and the equation $f=f(x,u(x))=x^2+u^3$, taking the partial derivative gives $\frac{\partial f}{\partial x}=2x$, but taking the total derivative gives $\frac{df}{dx}=2x+3u^2\frac{du}{dx}$ by the chain rule. This distinction is a key point in classical mechanics, for example, and captures essentially what you are asking.
See: What exactly is the difference between a derivative and a total derivative?
– Dante Grevino (answered Nov 24 at 15:19, edited Nov 24 at 21:33)
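As a concrete illustration of the partial/total distinction above, here is a minimal sketch (assuming SymPy is available; it is not part of the original answer) applied to the same $f = x^2 + u^3$:
```python
# Partial vs. total derivative of f(x, u(x)) = x^2 + u(x)^3 with SymPy.
import sympy as sp

x = sp.symbols('x')
u_of_x = sp.Function('u')(x)          # u as a function of x
u = sp.symbols('u')                   # u as an independent symbol

# Total derivative d f / d x: the chain rule picks up u'(x)
print(sp.diff(x**2 + u_of_x**3, x))   # 2*x + 3*u(x)**2*Derivative(u(x), x)

# Partial derivative with respect to x: u is held fixed
print(sp.diff(x**2 + u**3, x))        # 2*x

# Partial derivative with respect to u: this is the Jacobian entry used above
print(sp.diff(x**2 + u**3, u))        # 3*u**2
```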
Answer (score 2)
To make things simpler, imagine a very simple autonomous dynamical system
$$
\frac{{\rm d}u}{{\rm d}x} = \alpha (u - u_0) \tag{1}
$$
for some constants $\alpha$ and $u_0$. The solutions to this system are of the form
$$
u(x) - u_0 = ce^{\alpha x} \tag{2}
$$
The interesting thing to note here is that if $\alpha > 0$, the distance between $u(x)$ and $u_0$ grows exponentially with increasing $x$, so $u = u_0$ is said to be unstable. If $\alpha < 0$ the opposite occurs: the distance between $u$ and $u_0$ shrinks with increasing $x$, and in this case $u = u_0$ is stable.
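This behaviour is easy to confirm numerically. The sketch below (not from the answer; plain forward Euler with illustrative parameters) integrates Eq. (1) for both signs of $\alpha$ and tracks the gap $|u(x) - u_0|$:
```python
# Forward-Euler check of the stability claim for du/dx = alpha*(u - u0).

def euler_gap(alpha, u_init=1.5, u0=1.0, h=0.01, steps=1000):
    """Integrate Eq. (1) with forward Euler and return |u - u0| at the end."""
    u = u_init
    for _ in range(steps):
        u += h * alpha * (u - u0)
    return abs(u - u0)

print(euler_gap(alpha=-2.0))   # gap shrinks toward 0: u = u0 is stable
print(euler_gap(alpha=+2.0))   # gap blows up: u = u0 is unstable
```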
Now let's make things a bit more general. Imagine a system of the form
$$
\frac{{\rm d}u}{{\rm d}x} = f(u) \tag{3}
$$
and suppose there exists a point $u_0$ such that $f(u_0) = 0$ (as in Eq. (1)). If you want to understand the local stability of (3), you can Taylor expand $f$ around $u_0$:
$$
f(u) = f(u_0) + \left.\frac{{\rm d}f}{{\rm d}u}\right|_{u = u_0}(u - u_0) + \cdots \tag{4}
$$
Remember that $f(u_0) = 0$, so to first order $f(u)\approx f'(u_0)(u - u_0)$, and Eq. (3) becomes
$$
\frac{{\rm d}u}{{\rm d}x} \approx \underbrace{f'(u_0)}_{\alpha}(u- u_0) \tag{5}
$$
Now compare this with Eq. (1) and you realize that in order to understand the stability of the system around the point $u_0$ you need to know the value of $f'(u_0)$, a.k.a. the Jacobian. There are several caveats here you should be aware of; your text probably discusses them (e.g. what happens if $f'(u_0) = 0$, ...).
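To see Eqs. (3)-(5) at work on a genuinely nonlinear right-hand side, here is a small sketch (an illustrative example, not from the answer) with $f(u) = u(1-u)$, whose equilibria $u_0 = 0$ and $u_0 = 1$ have $f'(0) = 1$ and $f'(1) = -1$:
```python
# Jacobian-based stability check for du/dx = f(u) with f(u) = u*(1 - u).
# Illustrative example only; not part of the original answer.

def f(u):
    return u * (1.0 - u)

def fprime(u0, h=1e-6):
    """Centered-difference estimate of f'(u0), the 1x1 Jacobian."""
    return (f(u0 + h) - f(u0 - h)) / (2.0 * h)

print(fprime(0.0), fprime(1.0))   # ~ +1.0 (unstable) and ~ -1.0 (stable)

def evolve(u, h=0.01, steps=2000):
    """Forward Euler, started near an equilibrium."""
    for _ in range(steps):
        u += h * f(u)
    return u

print(evolve(0.01))   # drifts away from u0 = 0 (toward 1): unstable
print(evolve(1.01))   # returns toward u0 = 1: stable
```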
EDIT
Now imagine a system in two dimensions, something like
$$
\frac{{\rm d}u}{{\rm d}x} = f(u, v) ~~~~ \frac{{\rm d}v}{{\rm d}x} = g(u, v) \tag{6}
$$
You can define the vectors ${\bf z} = {u \choose v}$ and ${\bf F} = {f \choose g}$ so that the system above can be written as
$$
\frac{{\rm d}{\bf z}}{{\rm d}x} = {\bf F}({\bf z}) \tag{7}
$$
In this case not much changes: you can repeat the same analysis as in the first part and find that the stability of the system around a point ${\bf z} = {\bf z}_0$ is determined by the eigenvalues of the Jacobian evaluated at that point. And as before we require that $\color{blue}{{\bf F}({\bf z}_0) = 0}$. I highlight this condition because it will become important later.
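A minimal sketch of the two-dimensional case (the damped-oscillator system used here is illustrative, not from the answer): build ${\bf F}$, estimate its Jacobian at the equilibrium by finite differences, and look at the eigenvalues.
```python
# Eigenvalue test of a 2-D equilibrium for dz/dx = F(z).
# Example system (illustrative): u' = v, v' = -u - 0.5*v, equilibrium at (0, 0).
import numpy as np

def F(z):
    u, v = z
    return np.array([v, -u - 0.5 * v])

def jacobian(z0, h=1e-6):
    """Finite-difference Jacobian of F at z0."""
    n = len(z0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(z0 + e) - F(z0 - e)) / (2.0 * h)
    return J

eigs = np.linalg.eigvals(jacobian(np.array([0.0, 0.0])))
print(eigs)   # both real parts are negative, so (0, 0) is stable
```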
Now to the final part. Instead of an autonomous system, consider a system of the form
$$
\frac{{\rm d}u}{{\rm d}x} = f(x, u) \tag{8}
$$
You could rename $v = x$ (that is, create a new state), and note that
$$
\frac{{\rm d}u}{{\rm d}x} = f(u, v) ~~~~ \frac{{\rm d}v}{{\rm d}x} = 1 \tag{9}
$$
So in theory you could repeat the same analysis all over again, but you can see from this that the resulting field ${\bf F}$ never vanishes (that is, the blue condition above can never be satisfied).
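Even so, the augmentation itself is easy to carry out numerically. The sketch below (assuming SciPy; the setup is illustrative, not from the answer) integrates the question's equation $u'(x) = -\alpha(u - \sin(x)) + \cos(x)$ as the autonomous system in $(u, v)$ and shows two nearby solutions contracting at the rate set by the Jacobian entry $-\alpha$:
```python
# Integrate the question's ODE as an autonomous system by adding v = x, v' = 1.
# Assumes SciPy is installed; parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 2.0

def F(x, z):
    u, v = z                      # v plays the role of the renamed x
    return [-alpha * (u - np.sin(v)) + np.cos(v), 1.0]

sol_a = solve_ivp(F, (0.0, 5.0), [0.0, 0.0], dense_output=True)
sol_b = solve_ivp(F, (0.0, 5.0), [1.0, 0.0], dense_output=True)

xs = np.linspace(0.0, 5.0, 6)
gap = np.abs(sol_a.sol(xs)[0] - sol_b.sol(xs)[0])
print(gap)   # shrinks roughly like exp(-alpha * x) times the initial gap of 1
```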
Thanks for the detailed answer! It's exactly the stuff I'm reading right now. However, my question was not so much about stability but more about what it meant to take the derivative of $\sin(x)$ and $\cos(x)$ with respect to $u$.
– FredV
Nov 23 at 22:30
@FredV Sorry, I extended my answer; hopefully it makes more sense now. Otherwise I will gladly delete it.
– caverac
Nov 24 at 9:43