How to compute the directional derivative of a vector field?


























Suppose we are given a vector field $\vec{a}$ such that



$$\vec{a}(x_1,\ldots,x_n)=\sum_{i=1}^{k}f_i(x_1,\ldots,x_n)\,\vec{e_i}$$



where



$$\mathbf{S}=\{\vec{e_1},\ldots,\vec{e_k}\}$$
is some constant, orthonormal basis of $\Bbb{R}^k$.



What follows is to be taken with a cellar of salt. To compute the directional derivative, we start with the gradient. Its components are given by the matrix $\mathbf{G}$:



$$\mathbf{G}=\begin{bmatrix}\frac{\partial f_1(x_1,\ldots,x_n)}{\partial x_1} & \cdots & \frac{\partial f_1(x_1,\ldots,x_n)}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial f_k(x_1,\ldots,x_n)}{\partial x_1} & \cdots & \frac{\partial f_k(x_1,\ldots,x_n)}{\partial x_n}\end{bmatrix}.$$
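For concreteness, here is how $\mathbf{G}$ comes out for a toy field with $k=n=2$. This is a minimal sketch in Python with SymPy; both the tooling and the sample field are my own illustrative choices, not part of the problem:

```python
import sympy as sp

# Coordinates and a toy field  a = (f1, f2)  with k = n = 2.
x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x1**2 * x2,        # f1(x1, x2)
               sp.sin(x1) + x2])  # f2(x1, x2)

# G[i, j] = d f_i / d x_j, exactly the matrix written above.
G = f.jacobian([x1, x2])
print(G)  # Matrix([[2*x1*x2, x1**2], [cos(x1), 1]])
```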



The gradient $\vec{\nabla}\vec{a}$ itself is given by the double sum



$$\vec{\nabla}\vec{a}=\sum_{i=1}^{k}\sum_{j=1}^{n}\frac{\partial f_i(x_1,\ldots,x_n)}{\partial x_j}\,\vec{e_i}\otimes\vec{e_j}.$$
When dealing with scalar-valued functions, the derivative in the direction of some vector $\vec{u}$ would be the scalar projection of the gradient onto $\vec{u}$.



Assuming this still holds, the directional derivative $\mathrm{D}_{\vec{u}}(\vec{a})$ of $\vec{a}$ is



$$\mathrm{D}_{\vec{u}}(\vec{a})=\vec{\nabla}\vec{a}\cdot\frac{\vec{u}}{|\vec{u}|}.$$



Substituting in our double sum:



$$\mathrm{D}_{\vec{u}}(\vec{a})=\frac{\vec{u}}{|\vec{u}|}\cdot\sum_{i=1}^{k}\sum_{j=1}^{n}\frac{\partial f_i(x_1,\ldots,x_n)}{\partial x_j}\,\vec{e_i}\otimes\vec{e_j}.$$
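If the contraction with $\vec{u}/|\vec{u}|$ is read as ordinary matrix-vector multiplication by $\mathbf{G}$ (one possible reading; whether it is the right one is part of the question), the evaluation is mechanical. A sketch continuing the toy example above, again with my own illustrative choices:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x1**2 * x2, sp.sin(x1) + x2])  # same toy field as above
G = f.jacobian([x1, x2])

u = sp.Matrix([3, 4])        # an arbitrary direction; |u| = 5
u_hat = u / u.norm()

# Reading the contraction as matrix-vector multiplication G * (u/|u|):
D_u_a = G * u_hat
print(sp.simplify(D_u_a))    # Matrix([[6*x1*x2/5 + 4*x1**2/5], [3*cos(x1)/5 + 4/5]])
```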



Question: Is this generalisation for $\mathrm{D}_{\vec{u}}(\vec{a})$ true?




  • If so, how does one evaluate it?

  • If not, what is the proper way to find a directional derivative of a vector field?


Appendix



The sign $\otimes$ denotes the tensor product. Here, we have the tensor product of basis vectors.



Furthermore, following dyadics on Wikipedia, it seems for an orthonormal basis $$\mathrm{D}_{\vec{u}}(\vec{a})=\frac{\vec{u}}{|\vec{u}|}\mathbf{G}.$$ So if $\vec{u}=\vec{e_m}$, then $$\mathrm{D}_{\vec{e_m}}(\vec{a})=\vec{e_m}\mathbf{G}.$$ This makes no sense, unless it is some kind of tensor contraction... In such a case, $$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}\sum_{i=1}^{k}e_iG_{i1}\\ \vdots \\ \sum_{i=1}^{k}e_iG_{in}\end{bmatrix}.$$



Here $e_i$ denotes the $i^{th}$ component of $\vec{e_m}$; $G_{ij}$ denotes the $ij^{th}$ component of $\mathbf{G}$. And since we are in an orthonormal basis, only $e_m=1\neq0$:



$$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}e_mG_{m1}\\ \vdots \\ e_mG_{mn}\end{bmatrix}=\begin{bmatrix}G_{m1}\\ \vdots \\ G_{mn}\end{bmatrix}.$$



This seems to be the $m^{th}$ row of $\mathbf{G}$ transposed. And in derivative form,



$$\mathrm{D}_{\vec{e_m}}(\vec{a})=\begin{bmatrix}\frac{\partial f_m(x_1,\ldots,x_n)}{\partial x_1}\\ \vdots \\ \frac{\partial f_m(x_1,\ldots,x_n)}{\partial x_n}\end{bmatrix}.$$
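As a crude sanity check (my own numerical experiment on the toy field above, nothing authoritative), one can compare candidate formulas against the limit definition $\lim_{h\to 0}\frac{\vec{a}(\vec{x}_0+h\vec{e}_m)-\vec{a}(\vec{x}_0)}{h}$ by finite differences:

```python
import numpy as np

def a(x):
    # Same toy field as above: a(x1, x2) = (x1**2 * x2, sin(x1) + x2).
    return np.array([x[0]**2 * x[1], np.sin(x[0]) + x[1]])

def directional_fd(a, x0, u, h=1e-6):
    # Finite-difference approximation of the limit definition (u of unit length).
    return (a(x0 + h * u) - a(x0)) / h

x0 = np.array([1.0, 2.0])
e1 = np.array([1.0, 0.0])
print(directional_fd(a, x0, e1))  # ~[4.0, 0.5403]
```

For this field the finite difference along $\vec{e}_1$ reproduces the first column of $\mathbf{G}$, i.e. $(\partial f_i/\partial x_1)_i$, which is worth contrasting with the row obtained above.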










partial-derivative vector-analysis tensor-products matrix-calculus vector-fields

asked Oct 19 '16 at 18:47 by Linear Christmas; edited Mar 2 at 20:48 by Rodrigo de Azevedo
























  • en.wikipedia.org/wiki/Lie_derivative
    – user8960
    Oct 19 '16 at 18:56










  • @user8960: Cognisant of the possibility of seeming ignorant... Is that to say the formula I gave is not true, and the correct approach would be the Lie derivative (LD)? Or is the LD in this case equivalent to calculating as postulated? And is the LD a further generalisation for $p$-order tensor fields? Also, I have the non-rigorous feeling that the equation for $\mathrm{D}_{\vec{u}}(\vec{a})$ simplifies quite a bit if $\vec{u}$ is one of the vectors $\vec{e_i}$. Is this true?
    – Linear Christmas
    Oct 19 '16 at 19:33












  • Something else to be sure of: make sure your basis vectors, $\hat{e}_i$, are position independent; otherwise their derivatives will have non-trivial contributions.
    – Sean Lake
    Oct 19 '16 at 21:24










  • @SeanLake: duly noted. Will edit thread.
    – Linear Christmas
    Oct 19 '16 at 21:29










  • Isn't the directional derivative just the product of the Jacobian matrix and the direction vector?
    – Rodrigo de Azevedo
    Mar 2 at 21:00
















1 Answer
To generalize, let's first go back a little and talk about the directional derivative of a scalar-valued function $f(\vec{x})$ of a vector variable $\vec{x}$ in a general and invariant language. If $\vec{d}$ is a direction vector (unit length), then the directional derivative of $f$ at $\vec{x} = \vec{x}_{0}$ in the direction $\vec{d}$ can be defined as follows:



It is the image of the linear transformation ${df \over d\vec{x}}(\vec{x}_{0})$ acting on the vector $\vec{d}$.



Thus, the generalization consists in replacing the scalar function $f$ by a vector-valued one, $\vec{f}$, and writing down the invariant definition of the derivative
$$
{d\vec{f} \over d\vec{x}}(\vec{x}_{0}).
$$
This derivative is, by definition, a certain linear transformation from (the tangent space at $\vec{x}_{0}$ of the domain of $\vec{f}$) to (the tangent space at $\vec{f}(\vec{x}_{0})$ of the range of $\vec{f}$).



The specific defining properties of this linear transformation can (and should be at first) stated without resorting to bases or tensor representations, and are described on page 66 of this book: https://books.google.com/books?id=JUoyqlW7PZgC&printsec=frontcover&dq=arnold+ordinary+differential+equations&hl=en&sa=X&ved=0ahUKEwjGv_y44OfPAhXDSSYKHXvZCC4Q6AEIHjAA#v=onepage&q=The%20action%20of%20diffeomorphisms&f=false
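Computationally, this invariant picture (the derivative at $\vec{x}_{0}$ acting on a vector $\vec{d}$) is what automatic-differentiation libraries call a Jacobian-vector product. A minimal sketch using JAX, my own choice of library and with a made-up field, just to illustrate the definition:

```python
import jax
import jax.numpy as jnp

def f(x):
    # A made-up vector field f : R^2 -> R^2, purely for illustration.
    return jnp.array([x[0]**2 * x[1], jnp.sin(x[0]) + x[1]])

x0 = jnp.array([1.0, 2.0])     # base point x_0
d  = jnp.array([1.0, 0.0])     # unit-length direction d

# jax.jvp returns (f(x0), df/dx(x0) applied to d); the second output
# is precisely the image of the linear transformation acting on d.
f_x0, Df_d = jax.jvp(f, (x0,), (d,))
print(Df_d)  # ~[4.0, 0.5403]
```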






answered Oct 19 '16 at 20:56 by user8960





















  • Could you give an assessment for the Appendix section?
    – Linear Christmas
    Oct 20 '16 at 15:55










  • Not sure what section you are referring to, and what you mean by assessment.
    – user8960
    Oct 20 '16 at 17:52










  • The OP (here: the original post) has a section titled Appendix. I was wondering whether the last formula makes sense (A). Also, what specifically is wrong with multiplying the gradient of a vector field with a unit vector? (B)
    – Linear Christmas
    Oct 20 '16 at 18:52












  • (A): see my response to (B).:) (B) Not necessarily anything wrong. The gradient of a vector field is generally a linear transformation. So, we need to be specific about what we mean by "multiplying a vector by a linear transformation". This will determine whether the last formula in your Appendix makes sense. And this presence of sense is most conveniently examined using invariant definitions (i.e., those independent of a specific choice of a coordinate system).
    – user8960
    Oct 20 '16 at 18:57






  • Let us continue this discussion in chat.
    – user8960
    Oct 21 '16 at 18:40










