Wikipedia:Reference desk/Archives/Mathematics/2009 July 8
July 8
Raise a matrix to a matrix power
Can a matrix be raised to the power of another matrix? NeonMerlin 02:38, 8 July 2009 (UTC)
- I have never heard of such an operation, and I can't yet think of any useful way of giving meaning to it. Why do you ask? Algebraist 02:41, 8 July 2009 (UTC)
- Well, A^n obviously makes sense for a square matrix A and integer n, and there is a matrix exponential exp(A) defined as the obvious power series in A, so why not A^B = exp(B log A) for some suitable "log A" that comes out the right shape? 208.70.31.206 (talk) 02:50, 8 July 2009 (UTC)
- Or A^B = exp((log A)B). Yeah, I thought of that, but I can't see what use it would be. Algebraist 03:02, 8 July 2009 (UTC)
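A minimal numerical sketch of the two conventions mentioned above, using SciPy's expm and logm (the matrices A and B are arbitrary illustrative choices, and A is assumed to have a principal matrix logarithm):

import numpy as np
from scipy.linalg import expm, logm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])    # no eigenvalues on the negative real axis, so logm(A) is defined
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

logA = logm(A)                    # principal matrix logarithm of A

A_pow_B_left  = expm(B @ logA)    # convention A^B := exp(B log A)
A_pow_B_right = expm(logA @ B)    # convention A^B := exp((log A) B)

# The two conventions agree only when B commutes with log A; here they differ.
print(np.allclose(A_pow_B_left, A_pow_B_right))   # False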
- But math isn't about being useful, especially when it comes to abstract data structures! That's why engineering is taught in a different department. NeonMerlin 03:10, 8 July 2009 (UTC)
- I think what Algebraist meant is that this concept may not have any application within mathematics. Although I agree that the purpose of mathematics is not for mere application in physics and other sciences, mathematical concepts are invented because they are interesting and shed some light on other concepts. As an example, I could define the concept of dimension of a vector space. This is interesting because vector spaces are characterized by their dimension (up to isomorphism). On the other hand, I could define another "concept" - the "product" of two vectors. Just define this product to be zero for any two vectors. This certainly makes the vector space into an algebra but is not nearly as interesting as the concept of dimension. This is not to say that the general idea of multiplying two vectors is not interesting (it has already been defined, in fact) but that some possible products, although amusing, are not so interesting to study. On the other hand, if you can find some interesting properties that the concept of A^B satisfies, and how it applies in matrix theory, I have no doubt that it is useful. --PST 04:00, 8 July 2009 (UTC)
- Well, talking of single functions, people usually prefer to consider a generic exponential function in the form e^(cx), rather than a^x, because it is somehow of better use, especially for the purposes of calculus. As to the general definition of a matrix f(A) or f(A,B) &c, as a function of one or more matrices, note that we do it also with operators in place of matrices, or even more generally, in abstract Banach algebras; the function f need not be analytic, nor even continuous, but a Borel function may be sufficient, depending on the context. You may check Functional calculus. However, not every nice property of the initial function f need hold for the corresponding f(A); for example, recall that in general exp(A)exp(B) is not such a simple thing as exp(A+B), if A and B do not commute. For the same reason A^B is not so nice an operation as it is with numbers (Algebraist alluded to this point, if you note). --pma (talk) 06:27, 8 July 2009 (UTC)
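A small check of that non-commutativity caveat; the specific nilpotent matrices below are my own choice, not anything from the thread:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

print(np.allclose(expm(A) @ expm(B), expm(A + B)))   # False: A and B do not commute
print(np.allclose(expm(A) @ expm(A), expm(A + A)))   # True: a matrix commutes with itself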
- The log of a matrix does sound interesting, so X = log A is such that A = e^X. There are all sorts of questions one can ask; for instance, the complex logarithm has a number of different possible valid values. Dmcq (talk) 10:21, 8 July 2009 (UTC)
Note that there are Wikipedia articles on matrix exponentiation and logarithms. The logarithm article addresses the question Dmcq was interested in: "Not all matrices have a logarithm and those matrices that do have a logarithm may have more than one logarithm."
I have yet to find an article discussing Neon's idea, and I certainly don't have the background to figure out what nuances might lie behind such a definition myself. I'm sure the same idea must have been discussed somewhere before; perhaps someone can hunt down an article or discussion, possibly outside Wikipedia. I'll look where I can, but others can pitch in as well. --COVIZAPIBETEFOKY (talk) 14:48, 8 July 2009 (UTC)
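To illustrate the quoted sentence, here is one hedged example of non-uniqueness: the 2x2 identity matrix has at least two distinct real logarithms (the choice of matrices is mine, for illustration only):

import numpy as np
from scipy.linalg import expm

I = np.eye(2)
L0 = np.zeros((2, 2))                       # the obvious logarithm of the identity
L1 = 2 * np.pi * np.array([[0.0, -1.0],
                           [1.0,  0.0]])    # generator of a full 2*pi rotation

print(np.allclose(expm(L0), I))   # True
print(np.allclose(expm(L1), I))   # True: a second, distinct real logarithm of I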
- Thinking about this a bit, it occurred to me that in certain circumstances, you could plausibly define a matrix logarithm to a matrix base, i.e. log_A(X) = (log X)/(log A). Readro (talk) 15:05, 8 July 2009 (UTC)
- What is that expression supposed to mean? I can think of at least four possibilities. There's also the alternate pair of definitions: log_A(X) is any Y with A^Y = X, where A^Y = exp(Y log A) or A^Y = exp((log A)Y). Algebraist 15:27, 8 July 2009 (UTC)
- Take the logarithm of both sides of the right-hand equality there, and you get either Y log(A) = log(X) or (log A)Y = log(X), depending on your definition of A^Y; solving for Y gives exactly the two readings of Readro's fraction. So your definitions are equivalent, and the ambiguity of Readro's definition is the exact same ambiguity as that of the definition of A^Y. --COVIZAPIBETEFOKY (talk) 14:56, 9 July 2009 (UTC)
- You're assuming log(A) is invertible (you're also ignoring multivaluedness when you take logs; I haven't bothered checking if this changes things). One might also want to interpret Readro's fraction A/B, for A and B matrices, as another potentially multivalued expression whose value is any C such that A=BC (or A=CB). Algebraist 15:09, 9 July 2009 (UTC)
- Let's assume for a moment that all the possible values of log(A^Y) match all the possible values of Y log(A) (I'm not going to keep listing the alternative (log A)Y, because I'm lazy, but I haven't forgotten about it and neither should you); I honestly don't know if this is always the case. But if it is, and if we interpret Readro's fraction the way you suggested, then your definitions yield identical equations.
- Removing that assumption, there may possibly be a logarithm of A^Y which is not of the form Y log(A) for some choice of logarithm of A (vice versa is not possible, due to our definition of A^Y). That means that Algebraist's definition actually probably encompasses more values of log_A(X) than Readro's. --COVIZAPIBETEFOKY (talk) 19:46, 9 July 2009 (UTC)
- Oh, and I should be the first to admit that, in accordance with the above, I was wrong about your definitions being equivalent. They are equivalent only if the assumption I made in the first paragraph above is valid, but I see no reason to make that assumption. --COVIZAPIBETEFOKY (talk) 19:50, 9 July 2009 (UTC)
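A sketch of one reading of the discussion above, assuming the convention A^Y = exp(Y log A), principal logarithms throughout, and an invertible log A (exactly the caveat Algebraist raises); the test matrices are my own choice:

import numpy as np
from scipy.linalg import expm, logm

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
X = np.array([[4.0, 0.0],
              [0.0, 9.0]])      # X = A^2, so log_A(X) should come out close to 2*I

Y = logm(X) @ np.linalg.inv(logm(A))        # one candidate value of log_A(X)

print(np.round(Y.real, 6))                  # approximately 2*I
print(np.allclose(expm(Y @ logm(A)), X))    # check: A^Y recovers X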
- Not all matrices have a logarithm and those matrices that do have a logarithm may have more than one logarithm.... That's OK if we think of a "logarithm of A" just as a matrix B in the pre-image of the map exp, that is, any B such that exp(B)=A. But then one does not speak of a logarithm, one would just say: a point of the pre-image. Usually "logarithm" is used to mean a function, that is, a function that in some domain selects continuously an element in the pre-image of exp. In this sense the quoted sentence is a bit misleading: it is not that there is a function (log) having "more values" at any point, as sometimes people say. A function is a function, and has exactly one value at each point. (Yes, one can consider multifunctions, but the context where multifunctions are of use is quite different from complex analysis, for multifunctions have quite a poor algebra: what is sqrt(1)+sqrt(1)? The set {-2,0,2} maybe?) Let's say that exp(z), like any other complex function, has a local section at any regular point. That is, a function g:U→C such that exp(g(z))=z for all z in some domain U. The idea is very simple and geometrically clear. Each section can be extended to a maximal domain, and some call it "a determination of the logarithm". Each of these functions, like any holomorphic function, may be used in the functional calculus to define g(A), provided spec(A) is a subset of U, for any matrix A, or whatever else we like. --pma (talk) 18:38, 8 July 2009 (UTC)
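A hedged sketch of that last point: any determination g of the logarithm gives a g(A) via the functional calculus as long as spec(A) avoids its cut. Here g is the principal branch shifted by 2*pi*i, applied through an eigendecomposition; the diagonalizable test matrix is my own choice:

import numpy as np
from scipy.linalg import expm, logm

def g(z):
    return np.log(z) + 2j * np.pi     # a non-principal determination of the logarithm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # eigenvalues 2 and 3, away from the branch cut

w, V = np.linalg.eig(A)               # functional calculus via diagonalization
gA = V @ np.diag(g(w)) @ np.linalg.inv(V)

print(np.allclose(expm(gA), A))       # True: g(A) is another (complex) logarithm of A
print(np.allclose(gA, logm(A)))       # False: it differs from the principal logarithm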
- Personally I prefer to think in terms of covering spaces if at all possible instead of either cuts or multifunctions. Dmcq (talk) 18:16, 10 July 2009 (UTC)