Wikipedia:Reference desk/Archives/Mathematics/2008 October 2
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
October 2
Question related to .999...
While I understood long ago that .999... = 1, I was wanting to ask a question that is implicatively the inverse of the .999... theorem: is 0.000...{infinity}...001 = 0? As an aside, I realise this question is quite trivial and simple; I ask only as a matter of interest, and extend thanks to anyone kind enough to offer me an answer. :) —Anonymous DissidentTalk 03:36, 2 October 2008 (UTC)
- I don't see how 0.000...{infinity}...001 is a meaningful expression. If it has infinite zeros then it can't terminate in a one. Dragons flight (talk) 03:47, 2 October 2008 (UTC)
- Language peeve alert -- please, I think you mean infinitely many zeroes, not that there are zeroes that are individually infinite.
- On the content -- it depends on what you mean by "meaningful". There is nothing wrong, syntactically, with having an infinite string of zeroes (this time infinite as an adjective is fine, since it modifies string rather than zeroes), then followed by a one. However this string is not the decimal representation of any real number. --Trovatore (talk) 03:53, 2 October 2008 (UTC)
- I understand it is not real; I just was wondering if infinity could be confined in any sort of way so that we could have something on the other side. While ∞+1 makes no sense, my real question was whether we could have an (∞) and then something else on the other side, in a decimal context. —Anonymous DissidentTalk 04:05, 2 October 2008 (UTC)
- Sure, why not? But it's just a string of symbols (albeit an infinitely large string). It's not a decimal representation. It doesn't denote anything (at least, not in any standard way). --Trovatore (talk) 04:12, 2 October 2008 (UTC)
- No, it's worse than that. It suggests you should construct a string of symbols that could never be written down, is logically impossible, and can't be connected to the normal meaning of such symbols. It's like saying "write the 32nd letter of the English alphabet". It's not even false, because what it attempts to refer to is non-existent. In short it's mathematical gibberish. Dragons flight (talk) 04:23, 2 October 2008 (UTC)
- Can't write down? But you can't write down, say, a string of a quintillion symbols, either. That doesn't prevent us from studying such strings.
- Logically impossible? Please provide a proof of a statement and its negation, from the assumption that such a string of symbols exists, using logic alone (that is, you are not allowed any other assumptions whatsoever). Logical impossibility is an extremely strong condition.
- There is nothing wrong whatsoever with infinite strings of symbols. They give you a good way to think about (for example) the Cantor space and the Baire space. A very nice elementary example from the theory of Borel equivalence relations is doubly-infinite words on some finite alphabet, where you forget where the origin is. None of this is "mathematical gibberish"; it's actually excellent mathematics. --Trovatore (talk) 04:37, 2 October 2008 (UTC)
- One can add definitions to mathematics, and create a system where that symbol might refer to something, but that's the same as saying I can define the 32nd letter in English to be "". Yes, there are many contexts where infinite sequences exist and are useful. But can you give any examples where the infinite sequences are terminated by a finite number of elements? To say an infinite sequence is terminated is a logical impossibility for any normal definition of "infinite sequence" and "terminated". The above notation doesn't make sense within the normal framework of decimal notation, and trying to bootstrap meaning onto a post-infinite notation simply to respond to Anonymous Dissident is bad mathematics. Dragons flight (talk) 05:49, 2 October 2008 (UTC)
- Believe it or not we do this (put something at the end of an infinite sequence) all the time. You should look at the articles ordinal number and transfinite induction. I see you have an undergrad degree in math but unless you took a course specifically in math logic you likely didn't get to these. In first-year graduate real analysis you'd likely have run across them in passing, say in the construction of the Borel hierarchy.
- Now, as for it not making sense in the "normal framework of decimal notation", I said as much. I said that this infinitely long symbol doesn't, in any ordinary way, denote anything. But this is a separate issue from whether it makes sense to have an infinite sequence with something at the end -- it most definitely does make sense and is not bad mathematics. --Trovatore (talk) 07:17, 2 October 2008 (UTC)
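Concretely, the object under discussion is just a sequence indexed by the ordinal $\omega+1$ rather than by the natural numbers (a standard formalization, not notation used in the thread):

\[
a_n = 0 \ \text{for all } n < \omega, \qquad a_\omega = 1,
\]

i.e., a function $a : \omega+1 \to \{0,1,\dots,9\}$. The string exists as a perfectly good set-theoretic object; what fails is only that no position of an ordinary decimal expansion (which is indexed by $\mathbb{N}$) corresponds to the final 1.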
- You can write the .00000.....1 as (0,1) instead and then you'll find it all works out quite nicely as in nonstandard arithmetic, plus it's shorter to write. That article could do with a bit of expansion. Dmcq (talk) 07:46, 2 October 2008 (UTC)
- Um, this one is not clear to me; I'm not sure what your notation means. What I was telling Anonymous Dissident was that you could make the string of symbols, not that there was any natural arithmetic to do with those strings. If you're talking about nonstandard models of arithmetic (and/or analysis) that's another kettle of fish—the positions in the decimal expansions of "reals" in the sense of those models are themselves nonstandard naturals, and some care has to be taken to make sense of it all. --Trovatore (talk) 09:34, 2 October 2008 (UTC)
- Dragons flight asked: 'can you give any examples where the infinite sequences are terminated by a finite number of elements?'
Please see Sharkovskii's theorem, which gives an example of such an infinite chain: it has a known infinite sequence as its beginning, a known infinite sequence as its tail, an infinite number of infinite sub-sequences inside itself, and it exhausts the positive integers, putting them in a total order. --CiaPan (talk) 12:29, 2 October 2008 (UTC)
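For reference, the total order in question is the Sharkovskii ordering, with the odd numbers at the front and the powers of two, in decreasing order, as the tail:

\[
3 \prec 5 \prec 7 \prec \cdots \prec 2\cdot 3 \prec 2\cdot 5 \prec \cdots \prec 2^2\cdot 3 \prec 2^2\cdot 5 \prec \cdots \prec 2^3 \prec 2^2 \prec 2 \prec 1.
\]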
- Have you already read the article on 0.999...? While it doesn't actually use the 0.000...01 notation, the section on "Alternative number systems" covers some of the ideas of what you have to break in the real numbers to let you have a non-zero value equal to 1 - 0.999... The talk page and associated arguments page (linked from talk) go a little more into it, and the fact is that you can break out transfinite numbers or surreal numbers or something to extend decimal expansions, but you can't then just assume that they'll work like normal decimals, and you have to determine a new set of rules for working with them. Since the decimal system is designed to represent real numbers, something's gotta give. Confusing Manifestation(Say hi!) 04:17, 2 October 2008 (UTC)
- Thanks very much for that pointer to 0.999.... The bit about alternative number systems says all I wanted to say about infinitesimal calculus, and I was just thinking of looking for that bit of game theory with Hackenbush where the value of one position can be greater than that of another, but not by any finite amount. The introduction is good; I suppose it's okay to be dogmatic about .999... being 1 and about erroneous intuitions concerning infinitesimals, even if that really depends on using standard real numbers as defined using Dedekind cuts or suchlike. Dmcq (talk) 22:10, 2 October 2008 (UTC)
Infinite sequences with something after them
OK, this is not strictly in the spirit of the refdesk, but I'd really like to say something about this strange meme that nothing can follow an infinite sequence, or that you can't do infinitely many things and then do something else. You see this in lots of otherwise mathematically well-informed folks, and it has a pernicious effect on their mathematical reasoning, sometimes leading them away from the most natural arguments.
Take as an example the Cantor-Bendixson theorem, which says that if you have a closed set of real numbers (or closed set in some other Polish space), then you can remove countably many points and have remaining a (possibly empty) perfect set; that is, a closed set with no isolated points. What's the natural way to show this? Well, delete all the isolated points, of course (easy to see there can be only countably many such).
Unfortunately, some points that were not isolated before may have become isolated, because you've removed a sequence of points that accumulated to them. So you remove these new isolated points as well. And you keep going, if necessary.
Now, after infinitely many iterations (that is, taking the intersection of everything you got at a finite iteration), do you still have isolated points? Possibly. You could have removed one point at stage 0, another at stage 1, another at stage 2, and so on, and these accumulated to some point that becomes isolated only at this first infinite stage. No problem; we keep on going beyond that.
There are a few details to check, but it's a fact that this process must close off at some countable ordinal number. At each stage you've removed only countably many points, and the countable union of countable sets is countable, so you're done, proof complete.
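In symbols, writing $X'$ for $X$ with its isolated points removed (the Cantor-Bendixson derivative), the iteration just described is:

\[
X^{(0)} = X, \qquad X^{(\alpha+1)} = \bigl(X^{(\alpha)}\bigr)', \qquad X^{(\lambda)} = \bigcap_{\alpha<\lambda} X^{(\alpha)} \ \text{for limit ordinals } \lambda,
\]

and the claim is that this stabilizes at some countable ordinal, the Cantor-Bendixson rank of $X$.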
This is by far the most natural and perspicuous proof of the theorem. But lots of mathematicians, perversely, prefer an all-at-once proof involving looking at the set of condensation points of the original set. Their proof, I will concede, is not really harder. But it's much less enlightening. The only reason I can see to prefer it is if you're not comfortable with transfinite induction. But there's no excuse not to be comfortable with transfinite induction! It's an absolutely basic technique and every mathematician needs to know it. I think that this unfortunate notion that nothing can follow infinitely many things may be at the root of their discomfort. --Trovatore (talk) 19:42, 3 October 2008 (UTC)
- Interesting. But I'm not convinced the prejudice against your proof has anything to do with transfinite induction. Here's a similar example that doesn't even reach ω:
- Problem: Given n "red" points and n "blue" points, a "pairing" of the points is a set of n line segments with one red and one blue endpoint, different for each segment. Show that any set of n red and n blue points on the plane in general position has a pairing with no intersections.
- Proof 1: Start with any pairing and uncross intersecting segments (by swapping which blue point corresponds to which red) until you can't any more. This process must terminate because each uncrossing operation decreases the total length of the segments, by the triangle inequality, and there are only finitely many possible pairings.
- Proof 2: Pick a pairing of minimum total length. It can't contain any intersections because you could then get a shorter pairing by uncrossing them.
- I find the first proof more intuitive, but the mathematician in me prefers the second one. I think the prejudice in your problem and mine is against proofs that invoke time and change. The answer to the problem is there in the Platonic realm, so better (the thinking goes) to grab it directly than to muck around with other objects that don't answer the question. -- BenRG (talk) 08:29, 4 October 2008 (UTC)
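Proof 1 is effectively an algorithm, and it can be run as one. A minimal sketch in Python (the orientation-based intersection test and the brute-force rescan are illustrative choices; points are assumed to be in general position):

```python
import itertools

def cross(p1, q1, p2, q2):
    """True if the open segments p1-q1 and p2-q2 properly intersect
    (sufficient under the general-position assumption)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (orient(p1, q1, p2) * orient(p1, q1, q2) < 0
            and orient(p2, q2, p1) * orient(p2, q2, q1) < 0)

def uncross(red, blue):
    """Start from an arbitrary pairing and swap the endpoints of any two
    crossing segments.  Each swap strictly shortens the total length (triangle
    inequality) and there are finitely many pairings, so this terminates."""
    n = len(red)
    perm = list(range(n))  # red[i] is paired with blue[perm[i]]
    changed = True
    while changed:
        changed = False
        for i, j in itertools.combinations(range(n), 2):
            if cross(red[i], blue[perm[i]], red[j], blue[perm[j]]):
                perm[i], perm[j] = perm[j], perm[i]
                changed = True
    return [(red[i], blue[perm[i]]) for i in range(n)]

print(uncross([(0, 0), (2, 0)], [(2, 1), (0, 1)]))
# [((0, 0), (0, 1)), ((2, 0), (2, 1))] -- the initial X-crossing gets uncrossed
```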
- But an argument in favour of Proof 1 is that it is more general because it depends only on the topological properties of the underlying space, whereas Proof 2 assumes that the underlying space is a metric space. So Proof 1 can be generalised to cases where the connections between the points are not necessarily line segments (and maybe there is not even a definition of "straight line" in the space), whereas Proof 2 cannot be so generalised. Gandalf61 (talk) 09:52, 4 October 2008 (UTC)
- I may be missing something, but I don't think Proof 1 is so easily generalized. You need to show that the uncrossing process eventually terminates, and the way that's done above uses the metric properties of the space. -- BenRG (talk) 17:03, 4 October 2008 (UTC)
So Ben, I don't really think your example is quite analogous. Your two proofs are close enough that I would call them the "same" proof; the inductive procedure in proof 1 is limited to showing that every finite partial order has a minimal element, which the second proof simply takes for granted. Other than that the two proofs are identical, unless I've missed something.
In my example, on the other hand, it's trivial to see that there's a subset of the original set that's left unchanged by one iteration; it's the empty set. But that doesn't help, because what we want to know is that we have to delete only countably many points. The all-at-once proof does have the merit of characterizing the points that are kept, but the inductive proof shows the structure of what has to be removed (and in particular the structure of countable closed sets in general).
Another class of examples of my point is the way that algebra classes, following Bourbaki, bundle all uses of the axiom of choice into the clunky and intuitively obscure Zorn's lemma, applications of which can almost always be replaced by a simple and clear transfinite induction in which you make a choice at each step. --Trovatore (talk) 19:12, 4 October 2008 (UTC)
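As a concrete instance of the contrast (a sketch, not taken from the post): to show that every vector space $V$ has a basis, well-order $V$ and define by transfinite recursion

\[
b_\alpha = \text{the least } v \in V \text{ in the well-order with } v \notin \operatorname{span}\{\, b_\beta : \beta < \alpha \,\},
\]

stopping when no such $v$ remains. The $b_\alpha$ span $V$ by the stopping condition, and they are independent because each one lies outside the span of its predecessors; a choice is made at each step, with no detour through maximal elements of chains.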
Logic problem: Working out Wendy's birthdate
I wrote an article on Wendy Whiteley the other day, and later I started looking at what little evidence I've mustered, in order to narrow down when she was born.
I thought I'd got it to between 7 April and mid-October 1941, but when I added in the known age (17) of her future husband Brett Whiteley when they first met, and knowing exactly when he was born (7 April 1939), I was led to conclude she must have been born before 7 April 1941. It can't have been both before and after 7 April 1941. The possibility that they had the same birthday (7 April) exists, but I'm sure that coincidence would have been mentioned many times in the literature, and I've never seen any such references.
Can some kind soul take a glance over my thinking at Talk:Wendy Whiteley and tell me what logical errors I've made or what wrong assumptions I've relied on. Or, maybe I've made no logical errors, but the evidence is inherently contradictory and my analysis has simply revealed this to be the case. I'm pretty sure everything you need to know is there, but if not, just ask.
Thanks for any assistance you can provide. -- JackofOz (talk) 03:55, 2 October 2008 (UTC)
- Bear in mind that, 65 being a "round number", 'now 65' might mean 'now over 65' and thus anything from 65.0 to 69.9. -- SGBailey (talk) 07:57, 2 October 2008 (UTC)
- That's possible, but I think unlikely. The full sentence is "Standing on the balcony of the Whiteley house, gazing out at the water through the sinuous Moreton Bay fig and three cabbage tree palms that Brett so often depicted in his harbour paintings, Wendy, now 65, is, as ever, an eye-catching walking work of art herself". If she was really 67 at the time, this would be a very misleading statement. -- JackofOz (talk) 19:58, 2 October 2008 (UTC)
Continuity on the complex plane
Hi. I'm in a complex variables class, and I'm struggling with a homework problem. I'd like a hint, or a push in the right direction.
We're given that D is the domain obtained by removing the interval $[0,\infty)$ of the real line from the complex plane. We want to show that a branch of the cube root function on this domain is given by:

\[
g(z) = \begin{cases} \sqrt[3]{z} & \operatorname{Re}(z) \ge 0 \\ e^{2\pi i/3}\,\sqrt[3]{z} & \operatorname{Re}(z) < 0 \end{cases}
\]
To be a branch of an inverse function, g(z) must be a right inverse of f(z) (where $f(z)=z^3$), and it must be continuous. Showing that it's a right inverse is no problem, but the continuity is giving me trouble. My instinct tells me that, when choosing $\delta$, I'll want to make it a minimum of two options, one for the part of the line near the origin, and another for the rest of the line. I'm also pretty confident that I want to start out with:

\[
|g(z) - g(z_0)| = \frac{|z - z_0|}{\bigl|g(z)^2 + g(z)g(z_0) + g(z_0)^2\bigr|}
\]

...and then I want to bound that denominator from below, but I'm stuck right there. Can someone please throw me a bone? -GTBacchus(talk) 15:56, 2 October 2008 (UTC)
- The equation $x^3=z$ has 3 complex solutions, each of which can be called a 'cube root of z'. So your function g(z) is not really well defined. If x is one of the solutions, the other two are $xa$ and $xa^2$, where $a=e^{2\pi i/3}$ is a primitive cube root of unity ($a\ne 1$, $a^2\ne 1$, $a^3=1$). Assume that z moves continuously around the unit circle from 1 to 1 ($z=e^{i\varphi}$, $0<\varphi<2\pi$); then the root $x=e^{i\varphi/3}$ moves continuously from 1 to $a$, the root $xa$ moves continuously from $a$ to $a^2$, and the root $xa^2$ moves continuously from $a^2$ to 1. I hope this is helpful to your understanding. Bo Jacoby (talk) 20:12, 2 October 2008 (UTC).
- I probably should have said that, in the notation we've been using, $\sqrt[3]{z}$ is defined as $\sqrt[3]{|z|}\,e^{i\operatorname{Arg}(z)/3}$, where the first factor is a real number and where $\operatorname{Arg}(z)$ denotes the principal argument of z, which is always in the interval $(-\pi,\pi]$. In other words, $\sqrt[3]{z}$ denotes the principal cube root of z, which satisfies $\operatorname{Arg}\bigl(\sqrt[3]{z}\bigr) \in (-\pi/3, \pi/3]$. With that clarification, the function is well-defined, and I still don't know how to show that it's continuous along the negative real axis.
Thanks for pointing out the ambiguity in my notation, though. -GTBacchus(talk) 22:11, 2 October 2008 (UTC)
- Consider a small positive angle ε.
\[
\sqrt[3]{e^{i(\pi-\varepsilon)}} = e^{i(\pi-\varepsilon)/3} \to e^{i\pi/3} \quad (\varepsilon \to 0^+),
\]
while
\[
\sqrt[3]{e^{-i(\pi-\varepsilon)}} = e^{-i(\pi-\varepsilon)/3} \to e^{-i\pi/3},
\]
so the principal cube root is not continuous at −1. Bo Jacoby (talk) 10:39, 3 October 2008 (UTC).
- I know the principal cube root is discontinuous at -1; the point of this problem is to define a different branch, g(z), that is continuous at -1, and which is discontinuous at 1. The function g(z) is not the principal cube root; it's the notation $\sqrt[3]{z}$ that is taken to mean the principal cube root. That's how the function is well-defined, because we're not using "$\sqrt[3]{z}$" to mean just any old cube root. As you can see, g(z) is different from $\sqrt[3]{z}$ in the bottom half-plane. It's different by an argument of 2π/3. -GTBacchus(talk) 20:38, 4 October 2008 (UTC)
- Indeed. As given, g is discontinuous along the negative axis. Moreover, it cannot be fixed by changing only the second clause of the definition: the principal $\sqrt[3]{z}$ is continuous along the positive real axis, hence if such a variant of g were continuous, it would have a holomorphic extension to all of $\mathbb{C}\setminus\{0\}$, which is impossible. — Emil J. 11:15, 3 October 2008 (UTC)
- Ok, apparently I'm not communicating this clearly at all. This problem comes straight out of the textbook, and I don't think it's a misprint.
I am quite well aware that the principal cube root is continuous on the positive real axis, and discontinuous on the negative real axis. This problem is about defining a different cube root function - not the principal cube root - that is continuous on the negative real axis, and discontinuous on the positive real axis. Nobody is trying to define a cube root that's holomorphic on $\mathbb{C}\setminus\{0\}$; I agree that's impossible. What we're doing is moving the discontinuity to the positive side, which is not impossible.
The function g(z), given above, is very, very discontinuous on the positive real axis, and it's very, very continuous on the negative real axis. Just think about plugging in a number whose principal argument is just short of π - the function g will return a number whose argument is just short of π/3. Now plug in a number just below the negative real axis, so its principal argument is very close to -π. The function g, as given, will return a value whose argument is slightly greater than -π/3, multiplied by $e^{2\pi i/3}$, which puts us just past π/3.
The function g, as given, is equal to the principal cube root only in the top half-plane, and it's equal to a different cube root in the bottom half-plane, one chosen so that it will be continuous on the negative real axis, and the discontinuity is moved to the positive real axis. The range of g(z) is the sector of the complex plane with principal argument ranging from 0 to $2\pi/3$. It's not impossible; in fact, this is a quite standard problem in defining different branches of inverse functions. What I really want is some help with the epsilon-delta argument. Can anyone help me with that, or do I have to settle for people telling me that my textbook, my professor, and my clear understanding are wrong, and that g(z) is either not well-defined, or is the principal cube root, when I know damn well that it isn't? This is frustrating. Can someone who understands branches of inverse functions please help me with the epsilons and deltas? -GTBacchus(talk) 20:38, 4 October 2008 (UTC)
- I haven't read all this discussion, but I think that in your original definition of g, you wanted to split the definition according to the sign of the imaginary part of z, not the real part. Algebraist 22:19, 4 October 2008 (UTC)
- Holy cow; you're right. That's precisely what I meant to do. Now I feel stupid, and I'd really like to know how to prove continuity on the negative real line for the function defined:

\[
g(z) = \begin{cases} \sqrt[3]{z} & \operatorname{Im}(z) \ge 0 \\ e^{2\pi i/3}\,\sqrt[3]{z} & \operatorname{Im}(z) < 0 \end{cases}
\]
- I apologize for screwing up so royally. Now I know why this discussion has been so incredibly frustrating. If anyone can look past my stupidity and help me, I would be very, very grateful. -GTBacchus(talk) 00:34, 5 October 2008 (UTC)
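A quick numerical sanity check of this definition, as a sketch in Python (the function name g and the sample points are illustrative):

```python
import cmath

def g(z):
    """The branch defined above: principal cube root for Im(z) >= 0,
    rotated by e^(2*pi*i/3) for Im(z) < 0."""
    principal = abs(z) ** (1 / 3) * cmath.exp(1j * cmath.phase(z) / 3)
    return principal if z.imag >= 0 else cmath.exp(2j * cmath.pi / 3) * principal

# The jump across the negative real axis shrinks to zero (continuity at -8) ...
for eps in (1e-3, 1e-6, 1e-9):
    print(abs(g(complex(-8, eps)) - g(complex(-8, -eps))))

# ... while the jump across the positive real axis stays near 2*sqrt(3).
print(abs(g(complex(8, 1e-9)) - g(complex(8, -1e-9))))
```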
- Just show that $\lim_{\varepsilon\to 0^+} g\bigl(ae^{i(\pi-\varepsilon)}\bigr) = \lim_{\varepsilon\to 0^+} g\bigl(ae^{-i(\pi-\varepsilon)}\bigr)$ for positive a. And don't call yourself stupid. Errare humanum est. Bo Jacoby (talk) 08:15, 5 October 2008 (UTC).
- That would only show that the limit is the same coming from two directions; to show continuity, we must show that it is the same from any direction. In particular, I have to show that, for every $z_0$ on the negative real line, and given any small $\varepsilon > 0$, there is some $\delta > 0$ so that, whenever $|z - z_0| < \delta$, we have $|g(z) - g(z_0)| < \varepsilon$. There are examples of complex functions with limits existing when a point is approached in two, four, or any number of directions, which still fail to be continuous. I know that there is an argument beginning in the manner I suggested in my original post, and I'd very much like to find it.
I'm not stupid, but I did something stupid, and then compounded it by failing to re-read my original post and discover the error. Don't worry about my self-esteem; I know that I'm quite bright, but I've also got stupidity in me. Similarly, I'm generous, but I've got greed in me; I'm modest, but I've got pride in me; I'm patient and thoughtful, but I've got impetuousness in me. I'm okay with all of that. Thanks though, for reminding me that I'm allowed my errors. -GTBacchus(talk) 22:31, 5 October 2008 (UTC)
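For what it's worth, an epsilon-delta argument along the lines suggested in the original post does go through; here is one possible completion (the constants are one workable choice, not the only one). Since $g(z)^3 = z$,

\[
|g(z) - g(z_0)| = \frac{|z - z_0|}{\bigl|g(z)^2 + g(z)g(z_0) + g(z_0)^2\bigr|}.
\]

If $|z - z_0| < |z_0|/2$, then $\operatorname{Arg}(g(z))$ and $\operatorname{Arg}(g(z_0))$ differ by less than $\pi/3$, so the three terms $g(z)^2$, $g(z)g(z_0)$, $g(z_0)^2$ all lie in a sector of opening less than $2\pi/3$; projecting each term onto the bisector of that sector gives

\[
\bigl|g(z)^2 + g(z)g(z_0) + g(z_0)^2\bigr| \ \ge\ \tfrac12\bigl(|g(z)|^2 + |g(z)||g(z_0)| + |g(z_0)|^2\bigr) \ \ge\ \tfrac12 |z_0|^{2/3},
\]

so $\delta = \min\bigl(|z_0|/2,\ \tfrac12\varepsilon|z_0|^{2/3}\bigr)$ works for any $z_0$ on the negative real line.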
- You are perfectly right. But take one step at a time. The g-function is composed of functions that are continuous except at the real axis. That proves continuity except at the real axis. The second step is the one above. The third step is to release the constraint that ε is a positive real. (For variable z and fixed $z_0 \ne 0$ consider $z/z_0 - 1$.) The fourth step is the case $z_0 = 0$. Making mistakes is an unavoidable and actually desirable component of the process of learning. If you are less stupid today than yesterday, then you have learned something. Bo Jacoby (talk) 07:25, 6 October 2008 (UTC).
- Wow, this is very frustrating. I feel as if I'm being talked down to. Please drop the condescending tone. I don't feel bad about making mistakes, nor do I need reassurance about the learning process. I've been teaching mathematics for about 15 years, and learning it for about twice that long. I've made thousands and thousands of mistakes, and I'm grateful for every one of them. The mistake in this case was a fucking stupid typo (though I'm not stupid), and then a failure to scroll up and read what I'd typed. I'm grateful for that stupid mistake; thank you. (And no, I don't use words like "stupid" except when I'm among colleagues who know that we're all qualified and not really stupid. I thought this was such a context - it has been before.)(Insertion: I am sorry. I had no intention of talking down to anybody. Bo Jacoby (talk) 12:27, 7 October 2008 (UTC).)
I know that we've got continuity except on the negative real line - "step one" was done before I ever posted here. The "second step" you indicated is easy. I can do it in my sleep: When $\operatorname{Im}(z) \ge 0$, there's nothing to do; just let ε go to 0. When $\operatorname{Im}(z) < 0$, then we have: $g\bigl(ae^{-i(\pi-\varepsilon)}\bigr) = e^{2\pi i/3}\,\sqrt[3]{a}\,e^{-i(\pi-\varepsilon)/3} = \sqrt[3]{a}\,e^{i(\pi+\varepsilon)/3}$, and again we can just let ε go to 0. Ok?
That's easy because we're only comparing $z_0$ with other numbers that have the same modulus. If I write up what I just did, my professor will ask why I'm screwing around with the easy case instead of just going straight to the general one. What you're calling "step 3" makes your "step 2" redundant and unnecessary. It's a good exercise for beginners, but this is a graduate level course. You're right that $z_0 = 0$ is a separate case, and I've not asked for help with that one; all I need there is to take $\delta = \varepsilon^3$, because the cube root is a strictly increasing function on the positive reals. That's a piece of cake.
Sigh.
Your hint about looking at $z/z_0 - 1$ may work. I don't yet see how it will give me a string of inequalities beginning with $|z - z_0| < \delta$ and ending with $|g(z) - g(z_0)| < \varepsilon$, but I'll work on that. Thank you for that hint. It's already too late to turn this in, but I'm going to work out the details anyway, for my own learning. Thanks again - I mean that. I am irritated with this interaction on other levels, but I do appreciate the math help.
I've used this resource in the past without feeling that I've been treated like a child; I'm not sure what went wrong this time. If I ask for help here in the future, I'll try to more clearly indicate my level of understanding, so I don't get handed Continuity 101 and the "it's ok to make mistakes" lecture. I hope you can understand what I find exasperating about this exchange. -GTBacchus(talk) 15:50, 6 October 2008 (UTC)
- You don't need to compute any more limits this far in the game. Let $z_0$ be a point on the negative real line, and $U$ its circular open neighborhood avoiding the origin. Let $h$, $h_+$, $h_-$ be the unique branches of the cube root defined on $U$ such that $h(z_0) = g(z_0)$, $h_+(z) = g(z)$ for $\operatorname{Im}(z) > 0$, and $h_-(z) = g(z)$ for $\operatorname{Im}(z) < 0$. All three functions are continuous by definition. Since you already proved $\lim_{\varepsilon\to 0^+} g\bigl(z_0 e^{i\varepsilon}\bigr) = g(z_0)$, and $h_+(z_0) = \lim_{\varepsilon\to 0^+} g\bigl(z_0 e^{i\varepsilon}\bigr)$ by continuity, we obtain $h_+(z_0) = h(z_0)$, hence $h_+ = h$. By a symmetrical argument, $h_- = h$, hence $g = h$ on $U$ and $g$ is continuous in $z_0$. — Emil J. 16:24, 6 October 2008 (UTC)
- I don't see how I can assume that there exist continuous branches of the cube root function on U. This problem is supposed to establish that one such branch exists. From the definition of a "branch of an inverse function on U" (a right inverse function that is continuous on U), it's not clear that one exists when U is an open set that straddles the negative real line.... or is it? How do we know that there exist continuous functions satisfying the conditions you describe? What am I missing?
Isn't there an epsilon-delta argument that someone can help me put together? Do we have to avoid that? I'm not married to the epsilon-delta format, but it's what I used for the square root function when I proved that it was uniformly continuous on certain domains, and it seems to be the flavor of proof that we usually use.
On a side note, the argument at z=0 doesn't have to be made, because g is defined on the domain $\mathbb{C}\setminus[0,\infty)$. -GTBacchus(talk) 19:47, 6 October 2008 (UTC)
- I may be mistaken, but isn't there some general theorem which guarantees the existence of branches of inverse functions on simply connected domains? Anyway, in this particular case, it is easy to define the branches explicitly. In fact, now that I think about it, it is possible to define directly a branch of cube root on the whole of $\mathbb{C}\setminus[0,\infty)$, giving a completely limit-free argument avoiding the "step 2" above. (Sorry, I'm not in an epsilon-delta mood.) The argument is as follows.
- Define the function
\[
h(z) = e^{i\pi/3}\,\sqrt[3]{-z}
\]
for $z \in \mathbb{C}\setminus[0,\infty)$. Obviously $h(z)^3 = e^{i\pi}(-z) = z$, and being a composition of continuous functions, h is continuous, thus it is a branch of the cube root on $\mathbb{C}\setminus[0,\infty)$. Now it suffices to show that g = h. Consider z such that $\operatorname{Im}(z) \ge 0$. As $h(z)^3 = z = g(z)^3$, we compute easily $\bigl(h(z)/g(z)\bigr)^3 = 1$. As $\operatorname{Arg}(h(z))$ and $\operatorname{Arg}(g(z))$ both lie in $(0, \pi/3]$, the difference $\operatorname{Arg}(h(z)) - \operatorname{Arg}(g(z))$ is an integral multiple of $2\pi/3$ only if it is zero, hence in fact $h(z) = g(z)$. The case $\operatorname{Im}(z) < 0$ is similar. — Emil J. 10:16, 7 October 2008 (UTC)
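A quick numerical cross-check that this h agrees with g, as a sketch in Python (the sample points are arbitrary; both functions are written out explicitly here):

```python
import cmath

def g(z):
    """The corrected branch from earlier in the thread."""
    principal = abs(z) ** (1 / 3) * cmath.exp(1j * cmath.phase(z) / 3)
    return principal if z.imag >= 0 else cmath.exp(2j * cmath.pi / 3) * principal

def h(z):
    """e^(i*pi/3) times the principal cube root of -z."""
    w = -z
    principal = abs(w) ** (1 / 3) * cmath.exp(1j * cmath.phase(w) / 3)
    return cmath.exp(1j * cmath.pi / 3) * principal

for z in (1 + 2j, -3 + 0.01j, -3 - 0.01j, 5j, -7 - 4j):
    print(abs(g(z) - h(z)))  # all ~ 0: the two branches coincide off [0, inf)
```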
- Not in an epsilon-delta mood, eh? No one seems to be. I guess if I want to use this problem to practice that, then I'm on my own. Whatever.
I talked to a few people yesterday, and came up with a couple of alternative strategies. The one you suggest is elegant, and works. Another option, which has a nice geometric appeal, is to define a different branch of the argument function, say arg(z) (lower-case 'a') where the range is the interval [0,2π), and establish that it is continuous. Then we could define $h(z) = \sqrt[3]{|z|}\,e^{i\arg(z)/3}$, and show that g=h.
Anyway, I suspect this horse is dead, and I thank everyone for their help. -GTBacchus(talk) 14:50, 7 October 2008 (UTC)
- Oh, and that theorem about branches on simply connected domains may exist, but we haven't got it yet in our class, so I probably shouldn't use it. -GTBacchus(talk) 15:12, 7 October 2008 (UTC)
Expressions
Hi. I've wondered for a long time, but never bothered asking anyone, whether or not the expression $\frac{x}{x}$ is always equal to 1 or $\frac{x^2}{x}$ is always equal to x. My basic algebraic skills are quite strong and I can easily simplify the aforementioned expressions, but my problem lies in what happens to these expressions, as well as others of a similar nature, when you let x equal zero. Can anyone shed some light on the matter please? Thanks 92.3.238.32 (talk) 16:28, 2 October 2008 (UTC)
- When x equals 0 they are formally undefined. However, for a particular application, one can define them to be whatever is convenient or useful. Typically this would be their respective limits as x approaches 0. Thus in your cases, it could be reasonable to define them as 1 and 0 respectively, although note that that is merely a convention you would choose to maintain. In other words, you cannot prove it, unless you did so circularly without realizing it. Baccyak4H (Yak!) 16:47, 2 October 2008 (UTC)
- These expressions have a removable singularity at $x = 0$. (Unfortunately, that article as currently written makes little sense to anyone not already familiar with complex analysis.) —Ilmari Karonen (talk) 20:31, 2 October 2008 (UTC)
- So to avoid having to learn complex analysis: the graph of $\frac{x}{x}$ would look just like $y = 1$, except there would be an open dot at (0,1), which would not be a solution of the equation. $\frac{x^2}{x}$ would act like $y = x$ with a discontinuity at (0,0). Paragon12321 02:59, 3 October 2008 (UTC)
- To answer this question with precision we need to distinguish between formal expressions and functions. As formal expressions, $\frac{x^2}{x}$ is algebraically equal to $x$, because we can convert the first expression into the second following the formal rules of algebra - in this context, $x$ is a formal variable and takes no values. Similarly the expression $\frac{x^2-x}{x-1}$ is also algebraically equal to the expression $x$. However, once we start to talk about specific values of $x$, we enter the realm of functions. And as functions, $x$, $\frac{x^2}{x}$ and $\frac{x^2-x}{x-1}$ are three different functions of $x$ - the first is defined everywhere, the second is not defined at 0 and the third is not defined at 1. Gandalf61 (talk) 09:36, 3 October 2008 (UTC)
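The expression/function distinction can be seen concretely in a computer algebra system. A small sketch using SymPy (the choice of library, and of the third expression above, are illustrative):

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 - x) / (x - 1)

print(sp.cancel(expr))       # x   -- equal to x as a formal expression
print(expr.subs(x, 1))       # nan -- as a function, undefined at x = 1
print(sp.limit(expr, x, 1))  # 1   -- the limit that fills the removable hole
```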
Find an analytic function f(z) such that f(f(z)) = exp(z)
Count Iblis (talk) 23:11, 2 October 2008 (UTC)
- I asked this a few years back. Wikipedia:Reference desk archive/Mathematics/February 2006#"square root" of exponential function. I asked on the newsgroup sci.math and sci.math.research, also. — Arthur Rubin (talk) 23:43, 2 October 2008 (UTC)
- [1] (Google groups, 1992) states that
- C. Horowitz, "Iterated Logarithms of Entire Functions", Israel Journal of Mathematics, vol. 29, no. 1, pp. 31-42 (1978).
- provides an entire solution of $h(z+1) = \exp(h(z))$, meaning that $f(z) = h\bigl(h^{-1}(z) + \tfrac12\bigr)$ will work. I've never actually checked it out, though. — Arthur Rubin (talk) 00:28, 3 October 2008 (UTC)
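The one-line check of why such an h would give a functional square root, assuming h is invertible (which, as noted just below, is exactly where the argument breaks down for an entire h):

\[
f(f(z)) = h\Bigl(h^{-1}\bigl(h\bigl(h^{-1}(z) + \tfrac12\bigr)\bigr) + \tfrac12\Bigr) = h\bigl(h^{-1}(z) + 1\bigr) = \exp\bigl(h\bigl(h^{-1}(z)\bigr)\bigr) = \exp(z).
\]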
- That analysis seems to assume that "h" is 1-1, which is clearly not possible for an entire function. Sorry about that. It appears I wasn't paying attention to the answer in 1992, either. — Arthur Rubin (talk) 00:34, 3 October 2008 (UTC)
- There must be hundreds of people who've spent odd moments on this; it really deserves its own article. I came to the conclusion years ago that no analytic solution was possible, though I'm willing to be proved wrong as I've not read anything about it - it seemed more the sort of thing to just mess around with. An interesting path it led me to was considering things like $x + f(x)/n$ iterated n times, where n tends to infinity, and mixing two such functions up in various ways. But I bet it's probably been named and got papers on it by now. Dmcq (talk) 08:57, 3 October 2008 (UTC)
- Arthur, Thanks anyway, that article looks interesting! Dmcq, so, perhaps we need to think in terms of conformal algebra. The question is then: what is the generator of the exponential function? Count Iblis (talk) 15:50, 3 October 2008 (UTC)
- We have an article called half iterate that seems to be about this, although it has almost no content. --128.97.245.105 (talk) 21:18, 3 October 2008 (UTC)
- We also have a tiny article functional square root. I can't currently see a reason to keep both. Algebraist 01:22, 4 October 2008 (UTC)
- They look like exactly the same thing to me. The latter is the more intuitive name, so I'd merge the former into it (I really should be packing, though, so I won't do it now!). --Tango (talk) 12:19, 4 October 2008 (UTC)
- Merged! I don't think I missed anything of substance (I dropped one red link); if someone wants to double-check me, cool. -GTBacchus(talk) 15:02, 7 October 2008 (UTC)
- I just had a look on the web and I see Babbage is credited with being the first to make a study of all the solutions of $f^n(x) = x$. Dmcq (talk) 14:18, 4 October 2008 (UTC)