If $A^2 = I$ (Identity Matrix) then $A = \pm I$


























So I'm studying linear algebra, and one of the self-study exercises has a set of true-or-false questions. One of the questions is this:

If $A^2 = I$ (the identity matrix), then $A = \pm I$.

I'm pretty sure it is true, but the answer key says it's false. How can this be false (maybe it's a typo in the book)?






























  • Try $$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. $$
    – Dylan Moreland
    Feb 5 '12 at 20:13

  • I'd point out that it is true if you're working with $1$-by-$1$ matrices (over $\mathbb C$, or any other integral domain). But for $n \geq 2$ the ring of $n$-by-$n$ matrices over any non-trivial ring is not an integral domain: this means that $(A+I)(A-I) = 0$ doesn't necessarily imply that $A + I = 0$ or $A - I = 0$.
    – Matt
    Feb 5 '12 at 20:30

  • possible duplicate of Finding number of matrices whose square is the identity matrix
    – Jonas Meyer
    Feb 5 '12 at 20:56

  • There's an entire family of so-called involutory matrices. Look up Householder reflectors, for instance.
    – J. M. is not a mathematician
    Feb 6 '12 at 5:11

  • What book is that exercise from?
    – Rhaldryn
    Jan 22 '17 at 17:43
















linear-algebra matrices examples-counterexamples






edited Nov 30 at 15:19 by Abcd
asked Feb 5 '12 at 20:11 by Randolf Rincón Fadul




5 Answers
































A simple counterexample is $$A = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$ We have $A \neq \pm I$, but $A^{2} = I$.
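A quick numerical check of this counterexample (an editor's addition, using NumPy; not part of the original answer):

```python
import numpy as np

# The proposed counterexample: a reflection across the x-axis.
A = np.array([[1, 0],
              [0, -1]])
I2 = np.eye(2, dtype=int)

assert np.array_equal(A @ A, I2)   # A^2 = I holds
assert not np.array_equal(A, I2)   # yet A is not I
assert not np.array_equal(A, -I2)  # and not -I either
```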






answered Feb 5 '12 at 20:15 by Martin Wanvik











































    In dimension $\geq 2$, take the matrix that exchanges two basis vectors ("a transposition").






    answered Feb 5 '12 at 20:16 by Blah





















    • If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 \leq i,j \leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1$ for $k \neq i,j$, $m_{ij} = m_{ji} = 1$, and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exchanged in $\mathbb{R}^{3}$, take $$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
      – Martin Wanvik
      Feb 5 '12 at 21:01

    • Thank you @Martin Wanvik, pretty clear explanation.
      – Randolf Rincón Fadul
      Feb 5 '12 at 21:52
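The swap matrix from the comment above can be checked numerically as well (an editor's addition, not from the thread):

```python
import numpy as np

# Permutation matrix exchanging e2 and e3 in R^3, as in Martin Wanvik's comment.
P = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])
I3 = np.eye(3, dtype=int)

# Swapping twice restores the original order, so P^2 = I,
# yet P is clearly neither I nor -I.
assert np.array_equal(P @ P, I3)
assert not np.array_equal(P, I3)
```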

































    I know $2 \cdot \mathbb C^2$ many counterexamples, namely

    $$A=c_1\begin{pmatrix}
    0&1\\
    1&0
    \end{pmatrix}+c_2\begin{pmatrix}
    1&0\\
    0&-1
    \end{pmatrix}\pm\sqrt{c_1^2+c_2^2\pm1}\begin{pmatrix}
    0&-1\\
    1&0
    \end{pmatrix},$$

    see Pauli matrices $\sigma_i$.

    These are all such matrices, and they can be written as $A=\vec e \cdot \vec \sigma$, where $\vec e^{\,2}=\pm1$.
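As a spot check of this family (an editor's addition; the choice $c_1 = c_2 = 1$ with the inner sign taken as $-1$ is mine, one reading of the formula under which $A^2 = I$ comes out):

```python
import numpy as np

# Basis matrices appearing in the family above.
sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])
J = np.array([[0, -1], [1, 0]])

# One member: c1 = c2 = 1, inner sign -1, outer sign +.
c1, c2 = 1.0, 1.0
A = c1 * sigma_x + c2 * sigma_z + np.sqrt(c1**2 + c2**2 - 1) * J

# A = [[1, 0], [2, -1]]: not ±I, but it squares to the identity.
assert np.allclose(A @ A, np.eye(2))
```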






    edited Feb 6 '12 at 18:40; answered Feb 5 '12 at 20:57 by Nikolaj-K













































      The following matrix is a counterexample: $
      A =
      \left( {\begin{array}{cc}
      -1 & 0 \\
      0 & 1 \\
      \end{array} } \right)
      $






      answered Feb 5 '12 at 20:20 by azarel











































        "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- their eigenvalues -- in the right basis. When doing arithmetic with such a matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.

        So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $\lambda^2 = 1$ -- and any such matrix will do.

        Thinking about matrices in this way -- as a list of independent numbers -- makes it easy to think your way through problems like this.
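This recipe can be followed mechanically (an editor's sketch; the particular invertible $P$ below is an arbitrary choice): pick eigenvalues in $\{+1,-1\}$, put them on a diagonal $D$, and conjugate by any invertible matrix.

```python
import numpy as np

# Eigenvalues satisfying lambda^2 = 1, placed on the diagonal.
D = np.diag([1.0, -1.0])

# An arbitrary invertible change of basis (det = 1 here).
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# A = P D P^{-1} then satisfies A^2 = P D^2 P^{-1} = I.
A = P @ D @ np.linalg.inv(P)

assert np.allclose(A @ A, np.eye(2))   # A^2 = I
assert not np.allclose(A, np.eye(2))   # but A is not ±I
assert not np.allclose(A, -np.eye(2))
```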






        answered Feb 6 '12 at 4:56 by Hurkyl

















        • Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $\pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $\pm 1$ on the diagonal and conjugating by invertible matrices.
          – Jonas Meyer
          Feb 6 '12 at 5:03

        • Jonas Meyer, this is only true if $\operatorname{char} F \ne 2$. Otherwise, there are such matrices which are not diagonalizable.
          – the L
          Feb 6 '12 at 8:18

        • @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
          – Hurkyl
          Feb 6 '12 at 9:53

        • @anonymous: Good point, e.g. $\begin{bmatrix}1&1\\ 0&1\end{bmatrix}$. @Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
          – Jonas Meyer
          Feb 6 '12 at 15:49











        Your Answer





        StackExchange.ifUsing("editor", function () {
        return StackExchange.using("mathjaxEditing", function () {
        StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
        StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
        });
        });
        }, "mathjax-editing");

        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "69"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: true,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f106070%2fif-a2-i-identity-matrix-then-a-pm-i%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        34














        A simple counterexample is $$A = begin{bmatrix} 1 & 0 \ 0 & -1 end{bmatrix} $$ We have $A neq pm I$, but $A^{2} = I$.






        share|cite|improve this answer


























          34














          A simple counterexample is $$A = begin{bmatrix} 1 & 0 \ 0 & -1 end{bmatrix} $$ We have $A neq pm I$, but $A^{2} = I$.






          share|cite|improve this answer
























            34












            34








            34






            A simple counterexample is $$A = begin{bmatrix} 1 & 0 \ 0 & -1 end{bmatrix} $$ We have $A neq pm I$, but $A^{2} = I$.






            share|cite|improve this answer












            A simple counterexample is $$A = begin{bmatrix} 1 & 0 \ 0 & -1 end{bmatrix} $$ We have $A neq pm I$, but $A^{2} = I$.







            share|cite|improve this answer












            share|cite|improve this answer



            share|cite|improve this answer










            answered Feb 5 '12 at 20:15









            Martin Wanvik

            2,5621215




            2,5621215























                19














                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer





















                • If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01












                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52
















                19














                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer





















                • If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01












                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52














                19












                19








                19






                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")






                share|cite|improve this answer












                In dimension $geq 2$ take the matrix that exchanges two basis vectors ("a transposition")







                share|cite|improve this answer












                share|cite|improve this answer



                share|cite|improve this answer










                answered Feb 5 '12 at 20:16









                Blah

                4,242915




                4,242915












                • If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01












                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52


















                • If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                  – Martin Wanvik
                  Feb 5 '12 at 21:01












                • Thank you @Martin Wanvik, pretty clear explanation.
                  – Randolf Rincón Fadul
                  Feb 5 '12 at 21:52
















                If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                – Martin Wanvik
                Feb 5 '12 at 21:01






                If you want to exchange the (standard) basis vectors $e_{i}$ and $e_{j}$ ($1 leq i,j leq n$), then use the matrix $A = [m_{ij}]$ with $m_{kk} = 1, kneq i,j$, $m_{ij} = m_{ji} = 1$ and $m_{kl} = 0$ for all other values of $k$ and $l$. For example, if you want $e_2$ and $e_3$ exhanged in $mathbb{R}^{3}$, take $$A = begin{bmatrix} 1 & 0 & 0 \ 0 & 0 & 1 \ 0 & 1 & 0 end{bmatrix}$$ It is clear that such a matrix always satisfies $A^2 = I$, since applying it twice always gets you back to where you started.
                – Martin Wanvik
                Feb 5 '12 at 21:01














                Thank you @Martin Wanvik, pretty clear explanation.
                – Randolf Rincón Fadul
                Feb 5 '12 at 21:52




                Thank you @Martin Wanvik, pretty clear explanation.
                – Randolf Rincón Fadul
                Feb 5 '12 at 21:52











                12














                I know $2·mathbb C^2$ many counterexamples, namely



                $$A=c_1begin{pmatrix}
                0&1\
                1&0
                end{pmatrix}+c_2begin{pmatrix}
                1&0\
                0&-1
                end{pmatrix}pmsqrt{c_1^2+c_2^2pm1}begin{pmatrix}
                0&-1\
                1&0
                end{pmatrix},$$



                see Pauli Matrices $sigma_i$.



                These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                share|cite|improve this answer




























                  12














                  I know $2·mathbb C^2$ many counterexamples, namely



                  $$A=c_1begin{pmatrix}
                  0&1\
                  1&0
                  end{pmatrix}+c_2begin{pmatrix}
                  1&0\
                  0&-1
                  end{pmatrix}pmsqrt{c_1^2+c_2^2pm1}begin{pmatrix}
                  0&-1\
                  1&0
                  end{pmatrix},$$



                  see Pauli Matrices $sigma_i$.



                  These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                  share|cite|improve this answer


























                    12












                    12








                    12






                    I know $2·mathbb C^2$ many counterexamples, namely



                    $$A=c_1begin{pmatrix}
                    0&1\
                    1&0
                    end{pmatrix}+c_2begin{pmatrix}
                    1&0\
                    0&-1
                    end{pmatrix}pmsqrt{c_1^2+c_2^2pm1}begin{pmatrix}
                    0&-1\
                    1&0
                    end{pmatrix},$$



                    see Pauli Matrices $sigma_i$.



                    These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.






                    share|cite|improve this answer














                    I know $2·mathbb C^2$ many counterexamples, namely



                    $$A=c_1begin{pmatrix}
                    0&1\
                    1&0
                    end{pmatrix}+c_2begin{pmatrix}
                    1&0\
                    0&-1
                    end{pmatrix}pmsqrt{c_1^2+c_2^2pm1}begin{pmatrix}
                    0&-1\
                    1&0
                    end{pmatrix},$$



                    see Pauli Matrices $sigma_i$.



                    These are all such matrices and can be written as $A=vec e· vec sigma$, where $vec e^2=pm1$.







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Feb 6 '12 at 18:40

























                    answered Feb 5 '12 at 20:57









                    Nikolaj-K

                    5,76223068




                    5,76223068























                        7














                        The following matrix is a conterexample $
                        A =
                        left( {begin{array}{cc}
                        -1 & 0 \
                        0 & 1 \
                        end{array} } right)
                        $






                        share|cite|improve this answer


























                          7














                          The following matrix is a conterexample $
                          A =
                          left( {begin{array}{cc}
                          -1 & 0 \
                          0 & 1 \
                          end{array} } right)
                          $






                          share|cite|improve this answer
























                            7












                            7








                            7






                            The following matrix is a conterexample $
                            A =
                            left( {begin{array}{cc}
                            -1 & 0 \
                            0 & 1 \
                            end{array} } right)
                            $






                            share|cite|improve this answer












                            The following matrix is a conterexample $
                            A =
                            left( {begin{array}{cc}
                            -1 & 0 \
                            0 & 1 \
                            end{array} } right)
                            $







                            share|cite|improve this answer












                            share|cite|improve this answer



                            share|cite|improve this answer










                            answered Feb 5 '12 at 20:20









                            azarel

                            11.1k22431




                            11.1k22431























                                6














                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.






                                share|cite|improve this answer

















                                • 1




                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03








                                • 2




                                  Jonas Meyer, this is only true if $char F ne 2$. Otherwise, there are such matrices which are not diagonalizable,
                                  – the L
                                  Feb 6 '12 at 8:18






                                • 1




                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. that the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53






                                • 1




                                  @anonymous: Good point, e.g. $begin{bmatrix}1&1\ 0&1end{bmatrix}$. @@Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49
















                                6














                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.






                                share|cite|improve this answer

















                                • 1




                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03








                                • 2




                                  Jonas Meyer, this is only true if $char F ne 2$. Otherwise, there are such matrices which are not diagonalizable,
                                  – the L
                                  Feb 6 '12 at 8:18






                                • 1




                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. that the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53






                                • 1




                                  @anonymous: Good point, e.g. $begin{bmatrix}1&1\ 0&1end{bmatrix}$. @@Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49














                                6












                                6








                                6






                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.






                                share|cite|improve this answer












                                "Most" (read: diagonalizable) matrices can be viewed simply as a list of numbers -- its eigenvalues -- in the right basis. When doing arithmetic with just this matrix (or with other matrices that diagonalize in the same basis), you just do arithmetic on the eigenvalues.



                                So, to find diagonalizable solutions to $A^2 = I$, we just need to write down a matrix whose eigenvalues satisfy $lambda^2 = 1$ -- and any such matrix will do.



                                When thinking about matrices in this way -- as a list of independent numbers -- it makes it easy to think your way through problems like this.







                                share|cite|improve this answer












                                share|cite|improve this answer



                                share|cite|improve this answer










                                answered Feb 6 '12 at 4:56









                                Hurkyl

                                111k9117259


                                • 1




                                  Every matrix satisfying $A^2=I$ is diagonalizable, because either it is $\pm I$ or its minimal polynomial is $(x-1)(x+1)$. The general solution is obtained by taking all diagonal matrices with entries $\pm 1$ on the diagonal and conjugating by invertible matrices.
                                  – Jonas Meyer
                                  Feb 6 '12 at 5:03








                                • 2




                                  Jonas Meyer, this is only true if $\operatorname{char} F \neq 2$. Otherwise, there are such matrices which are not diagonalizable.
                                  – the L
                                  Feb 6 '12 at 8:18






                                • 1




                                  @Jonas: That's a good point to mention as an appendix, but dealing properly with non-diagonalizable matrices in this fashion is somewhat more sophisticated. The only reason I mentioned the word was so that I didn't mislead Randolf into thinking this method works (unmodified) for all matrices; e.g. that the argument I gave isn't sufficient to tell us that this (or any!) equation has only diagonalizable solutions.
                                  – Hurkyl
                                  Feb 6 '12 at 9:53






                                • 1




                                  @anonymous: Good point, e.g. $\begin{bmatrix}1&1\\ 0&1\end{bmatrix}$. @Hurkyl: I agree, it is best as an appendix. I appreciate your caution, but wanted to point out that your method does lead to the general solution (in the characteristic $0$ case that the OP is probably working in).
                                  – Jonas Meyer
                                  Feb 6 '12 at 15:49
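The recipe in these comments -- take a diagonal matrix of $\pm 1$ entries and conjugate by any invertible matrix -- is easy to try out. A hedged sketch (the random seed and matrix size are arbitrary choices, not part of the argument):

```python
import numpy as np

# Build A = P D P^{-1} with D diagonal of +/-1 entries and P invertible.
# Then A^2 = P D^2 P^{-1} = P I P^{-1} = I, even though A != +/-I.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))        # a random matrix is almost surely invertible
D = np.diag([1.0, -1.0, -1.0])         # any choice of +/-1 on the diagonal
A = P @ D @ np.linalg.inv(P)

print(np.allclose(A @ A, np.eye(3)))   # True
```

Since $A^2 = P D^2 P^{-1}$ and $D^2 = I$ for any diagonal of $\pm 1$ entries, every such $A$ is a solution, and (over characteristic $0$) every solution arises this way.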































