Is division of matrices possible?
Is it possible to divide a matrix by another? If yes, what will be the result of $\dfrac{A}{B}$ if
$$
A = \begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix},
\qquad
B = \begin{pmatrix}
w & x \\
y & z \\
\end{pmatrix}?
$$
Tags: matrices, divisibility
see for example here. You'll have to distinguish between $AB^{-1}$ and $B^{-1}A$.
– Raymond Manzoni
Nov 3 '12 at 16:43
To add various types of brackets around entries of a matrix, you can use pmatrix or bmatrix or other variants. See tutorial at meta.
– Martin Sleziak
Nov 3 '12 at 16:49
asked Nov 3 '12 at 16:39 by Pranit Bauva; edited Nov 3 '12 at 18:21 by user2468
7 Answers
For ordinary numbers $\frac{a}{b}$ means the solution to the equation $xb=a$. This is the same as $bx=a$, but since matrix multiplication is not commutative, there are two different possible generalizations of "division" to matrices.
If $B$ is invertible, then you can form $AB^{-1}$ or $B^{-1}A$, but these are not in general the same matrix. They are the solutions to $XB=A$ and $BX=A$ respectively.
If $B$ is not invertible, then $XB=A$ and $BX=A$ may have solutions, but the solutions will not be unique. So in that situation speaking of "matrix division" is even less warranted.
answered Nov 3 '12 at 16:46 by Henning Makholm (edited Nov 3 '12 at 16:56)
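To see the distinction concretely, here is a small pure-Python sketch (the helper functions `matmul` and `inv` are illustrative, not from any library):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(X):
    # inverse of a 2x2 matrix via the adjugate formula
    a, b = X[0]
    c, d = X[1]
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]

right = matmul(A, inv(B))   # solves X B = A
left  = matmul(inv(B), A)   # solves B X = A

print(right)  # [[1.0, 1.0], [1.0, 3.0]]
print(left)   # [[2.0, 2.0], [1.0, 2.0]]
```

With these sample matrices the two quotients come out different, which is why a bare "$A/B$" is ambiguous for matrices.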
There is a way to perform a sort of division, but I am not sure if it is the way you are looking for. For motivation, consider the ordinary real numbers $\mathbb{R}$. For two real numbers, $x/y$ is really the same as multiplying $x$ by $y^{-1}=1/y$. We call $y^{-1}$ the inverse of $y$, and note that it has the property that $yy^{-1}=1$.
The same goes for other algebraic structures. That is, for two elements $x, y$ in such a structure we define $x/y$ as $xy^{-1}$ (under some operation). Most notably, we have a notion of division in any division ring (hence the name!). It turns out that if you consider invertible $n \times n$ matrices with addition and ordinary matrix multiplication, there is a sensible way to define division, since every invertible matrix has, well, an inverse. To help you grasp what an inverse is, say that you have a $2 \times 2$ matrix $$A= \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
The inverse of $A$ is then given by
$$A^{-1} = \dfrac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix},$$
and you should check that $AA^{-1}=E$, the identity matrix. Now, for two matrices $B$ and $A$, $B/A = BA^{-1}$.
answered Nov 3 '12 at 16:55 by Dedalus
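As a quick numeric sanity check of that inverse formula, a minimal pure-Python computation (the matrix and variable names are illustrative):

```python
# Check the 2x2 inverse formula A^{-1} = (1/(ad-bc)) [[d,-b],[-c,a]]
A = [[2, 1], [1, 1]]
a, b = A[0]
c, d = A[1]
det = a * d - b * c                      # ad - bc = 2 - 1 = 1
A_inv = [[d / det, -b / det],
         [-c / det, a / det]]

# verify that A * A_inv is the identity matrix
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```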
I know this is not the most eloquent answer, but why the downvote?
– Dedalus
Apr 2 '14 at 13:02
Normally matrix division is defined as $\frac{A}{B}=AB^{-1}$, where $B^{-1}$ stands for the inverse matrix of $B$. In the case when the inverse doesn't exist, the so-called pseudoinverse may be used.
answered Nov 3 '12 at 16:45 by Mykolas (edited Nov 3 '12 at 17:35 by Martin Sleziak)
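As an illustration of the pseudoinverse idea (an editor's sketch, not part of the original answer): for a rank-one matrix $B$, the Moore–Penrose pseudoinverse reduces to $B^T$ divided by the squared Frobenius norm, and the defining property $BB^{+}B=B$ holds even though $B$ has no inverse:

```python
# Pseudoinverse of a rank-one 2x2 matrix: B^+ = B^T / ||B||_F^2
B = [[1, 2], [2, 4]]                          # singular: det = 1*4 - 2*2 = 0
fro2 = sum(x * x for row in B for x in row)   # squared Frobenius norm = 25
B_pinv = [[B[j][i] / fro2 for j in range(2)] for i in range(2)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# defining property B * B^+ * B = B, up to floating-point rounding
check = matmul(matmul(B, B_pinv), B)
print(check)
```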
We can say $$\frac{A}{B}=A\times B^{-1},$$ where $B^{-1}$ is the inverse matrix of $B$.
answered Nov 3 '12 at 16:46 by Adi Dani
If $\cfrac{A}{B} = A \times B^{-1}$, then what is $B^{-1} \times A$?
– user31280
Nov 3 '12 at 18:29
That is another possibility, since the product of matrices is not commutative. So division of matrices cannot be defined in a unique way.
– Adi Dani
Nov 3 '12 at 19:02
There are two issues: first, that matrices have divisors of zero; second, that matrix multiplication is in general not commutative.
To give meaning to $A/B$, you need to give meaning to $I/B$ (because then $A/B=A(I/B)$). Now, no one ever writes $I/B$; people actually write $B^{-1}$. Anyway, what is $B^{-1}$? It should be a matrix such that multiplying it by $B$ gives you the identity. Now, there exist nonzero matrices $C$, $B$ with $BC=0$. If $B$ had an inverse $B^{-1}$, we would have
$$
0=B^{-1}0=B^{-1}BC=C,
$$
a contradiction. So such a matrix $B$ cannot have an inverse, i.e. "$I/B$" does not make sense.
The invertible matrices are exactly those with nonzero determinant. So, if $\det B\ne0$, then $AB^{-1}$ does make sense.
In your case, that would be the condition $wz-yx\ne0$. In that case,
$$
\begin{bmatrix}w&x\\ y&z\end{bmatrix}^{-1}=\frac{1}{wz-yx}\begin{bmatrix}z&-x\\ -y&w\end{bmatrix}.
$$
The second issue is a non-issue, because it can be proven that, for matrices, if $B^{-1}A=I$, then $AB^{-1}=I$.
answered Nov 3 '12 at 16:50 by Martin Argerami
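The zero-divisor step can be checked with concrete matrices (a minimal pure-Python sketch):

```python
# Concrete zero divisors: both B and C are nonzero, yet B*C is the
# zero matrix, so B cannot have an inverse.
B = [[1, 0], [0, 0]]
C = [[0, 0], [0, 1]]

BC = [[sum(B[i][k] * C[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(BC)  # [[0, 0], [0, 0]]
```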
Do you know why matrix multiplication is defined in such a weird way? It is defined thus so that the effect on a column vector of left-multiplying it by one matrix, and then left-multiplying the result by another matrix, is exactly the same as the effect of left-multiplying by a single matrix that is the product of those two matrices. That is, matrix multiplication corresponds to composition of linear operators. If you didn't know that already, then you should try to convince yourself that it's true in a simple class of special cases. I would recommend trying a 2 by 1 column vector with variables for entries, and two 2 by 2 matrices, also with variables for entries. So "dividing by" a matrix would correspond to "undoing" the effect of a linear operator. For instance, rotating the plane clockwise by a certain angle "undoes" the anticlockwise rotation of the plane by the same angle. However, there are plenty of linear operators that can't be undone (because they have squashed something flat), and the matrices representing these operators (with respect to any given basis) must therefore be not-dividable-by. This is unlike the situation with the real numbers, for instance, in which there is only one of them by which you can't divide, namely zero.
answered Aug 17 at 18:23 by Simon
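The rotation example can be sketched numerically (pure Python; the helper names are illustrative): rotating by $\theta$ and then by $-\theta$ returns the original vector, i.e. the inverse rotation "divides out" the first one.

```python
import math

def rot(t):
    # 2x2 rotation matrix for angle t (radians)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def apply(M, v):
    # multiply a 2x2 matrix by a column vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

theta = 0.7
v = [3.0, 4.0]
w = apply(rot(-theta), apply(rot(theta), v))
print(w)  # approximately [3.0, 4.0]
```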
Element-wise division is frequently used to normalize matrices in R or MATLAB. For instance, in R, if dtm is a contingency matrix counting occurrences of something, you can normalize the rows so that each row sums to 1.0:
dtm.proportions <- dtm / rowSums(dtm)
Or normalize the columns by transposing the matrix first:
dtm.proportions <- t(t(dtm)/colSums(dtm))
answered Dec 2 at 9:55 by Freeman
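For comparison, a plain-Python sketch of the same row normalization (an editor's illustration; the answer's own code is R, whose `/` is element-wise):

```python
# Normalize each row of a small count matrix so it sums to 1.0,
# mirroring dtm / rowSums(dtm) in R.
dtm = [[1, 3], [2, 2]]
proportions = [[x / sum(row) for x in row] for row in dtm]
print(proportions)  # [[0.25, 0.75], [0.5, 0.5]]
```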
protected by Community♦ Apr 1 '14 at 11:51