Wikipedia:Reference desk/Archives/Mathematics/2011 March 4
Welcome to the Wikipedia Mathematics Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 4
Half way between infinities
While swimming this evening, I was thinking about countability (hey, it's a long pool). In particular, I was thinking about the proposition "for an arbitrary integer x, there are as many integers greater than x as there are less than x". I figure by Cantor's counting argument this is true (there's an obvious bijective mapping from x+n to x-n for any integer n>0). Is my logic sound here? Secondly, I figure this extends to real numbers (where x is any real number and n is any real number >0), because that bijection is still valid. Is my logic sound there also? And lastly, I figured that this was only true when splitting the domain in "half" (yes, I realise half an infinity is still the same infinity); I don't think (either for integers or reals) that one can say there are twice as many numbers > x as < x, because this would be to try to map both x+n and x+2n to x-n, which isn't a bijection. Is this last proposition true as well? Thanks. 87.112.70.245 (talk) 21:59, 4 March 2011 (UTC)
- Your first two bits of logic are perfectly sound. The last bit isn't: just because one function doesn't work doesn't mean there isn't another function that does. It's a slightly meaningless question, really, since twice infinity is just infinity, but we can come up with a definition of "twice as many" which seems consistent with our intuitive understanding of the concept and makes mathematical sense. What we want is a function f : A → B such that, for every b in B, there are precisely two elements of A that map to b. That's easy to achieve.
- Without loss of generality, I'll take A to be all positive numbers and B to be all negative numbers (you can easily shift everything to centre the split on any given x). Then let f(x) = -(floor(x/2) + frac(x)), where floor(x) is the largest integer not exceeding x and frac(x) = x - floor(x) is the fractional part of x. That function has the desired property. Thus, there are twice as many positive numbers as negative numbers. You can change the 2 in the definition to any positive integer, n, to prove that there are n times as many positive numbers as negative numbers (including n=1, which is the case you've already covered). --Tango (talk) 22:33, 4 March 2011 (UTC)
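A quick spot-check of that construction (an editorial sketch, not part of the original thread; the function name f and the sample targets are my own):

    import math

    # Editorial sketch: Tango's proposed two-to-one map from the positive
    # reals onto the negative reals, f(x) = -(floor(x/2) + frac(x)).
    def f(x):
        return -(math.floor(x / 2) + (x - math.floor(x)))

    # For a target -(k + r) with integer k >= 0 and 0 < r < 1, the two
    # preimages are 2k + r and 2k + 1 + r:
    for k, r in [(0, 0.3), (1, 0.3), (5, 0.75)]:
        a, b = 2 * k + r, 2 * k + 1 + r
        print(a, b, f(a), f(b))   # f(a) == f(b) == -(k + r), with a != b

Each printed pair shows two distinct positive inputs landing on the same negative output.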
- (edit conflict) Firstly, for a fixed integer n, the sets L := { m ∈ Z : m < n } and G := { m ∈ Z : m > n } are both countably infinite sets. The bijection that you mention would be given by φ(n − m) := n + m for all m ≥ 1; so your logic is correct. Again, for a fixed real number x, the map ψ(x − y) := x + y for all y > 0 is a bijection from (−∞, x) to (x, ∞); so your logic is correct. Finally, the notion of twice as many doesn't make sense for infinite sets. You seem to have a good grasp of the basics, so you need to start thinking about cardinality. This tries to explain different infinities. For example, there are infinitely many integers, and infinitely many real numbers; but surely there are "more" real numbers than integers. That's what cardinality tries to address. Maybe take a look at Hilbert's hotel too. — Fly by Night (talk) 22:43, 4 March 2011 (UTC)
- I think it is up to temperament whether "twice as many" is meaningless or merely useless for infinite sets. It does make some sense, except that the sense it makes happens to be the same as "once as many". This is not only true for countable infinities, but also (at least under the axiom of choice) for higher infinities. For example, the reals R can be said to be twice as large as R itself, via a relation which assigns two different x's to each y. (There's some minor trouble at x=0; patching that up is left as an exercise for the reader.) –Henning Makholm (talk) 00:00, 5 March 2011 (UTC)
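For concreteness, one relation with exactly this property (an editorial addition; not necessarily the map Henning had in mind) is

    f : \mathbb{R}\setminus\{0\} \to \mathbb{R}, \qquad f(x) = \ln\lvert x\rvert, \qquad f^{-1}(\{y\}) = \{\, e^{y},\ -e^{y} \,\},

so every real y has exactly two preimages, and the excluded point x = 0 is the "minor trouble" referred to above.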
- That's an interesting philosophical question: does something without a meaning have a use? — Fly by Night (talk) 00:32, 5 March 2011 (UTC)
- Historically, complex numbers once had no meaning but nevertheless had use. Bo Jacoby (talk) 14:10, 8 March 2011 (UTC).
- Phrased in another way, there are as many integers greater than x as there are less than x, because they can be paired this way:
    x-1  x-2  x-3  x-4  x-5  x-6  x-7  x-8  …
    x+1  x+2  x+3  x+4  x+5  x+6  x+7  x+8  …
- But there are also twice as many integers greater than x as there are less than x, because you can pair two of the first kind with each one of the second kind, this way:
    x-1        x-2        x-3        x-4        …
    x+1  x+2   x+3  x+4   x+5  x+6   x+7  x+8   …
Conjugate Variables
I was watching some lectures on quantum mechanics on YouTube. (I admit it: I have no life!) The statement of Heisenberg's uncertainty principle says that, for conjugate variables x and y, we have (Δx)(Δy) ≥ ℏ/2. The article on conjugate variables says that "In mathematical terms, conjugate variables are part of a symplectic basis, and the uncertainty principle corresponds to the symplectic form." Now, I know about symplectic manifolds, isotropic and Lagrangian submanifolds, etc. But I don't understand how the uncertainty principle relates to the symplectic form. Neither the conjugate variables article nor the uncertainty principle article sheds any light.
- Can someone explain the definition of conjugate variables? (I know they are Fourier transform duals, but please explain.)
- Can someone tell me how the uncertainty principle relates to a symplectic form?
- If the uncertainty principle relates to a symplectic form then what's the underlying symplectic manifold?
Does anyone have any ideas? — Fly by Night (talk) 22:18, 4 March 2011 (UTC)
- Heisenberg group#On symplectic vector spaces mentions "symplectic" and (apparently the appropriate sense of) "conjugate" close to each other, and describes a setting where a commutator is the same as a symplectic (i.e. skew-symmetric nondegenerate) inner product. It doesn't quite draw the connection to physics, but remember the canonical commutation relation [x̂, p̂] = iℏ, where x̂ and p̂ are conjugate operators (which some authors consider the ultimate reason for the uncertainty principle). Since non-conjugate observables commute, this looks a lot like the relation [v, w] = ω(v, w) from the Heisenberg group article. (It should not, in hindsight, be surprising that symplectic things enter the picture, since symplectic forms are all about skew-symmetry, and commutators are the standard way in algebra to construct something skew-symmetric.) –Henning Makholm (talk) 23:34, 4 March 2011 (UTC)
- (Note that the above comment is not intended as a reply to can somebody explain, but merely to does anyone have any ideas? I too would be most interested in an answer to the former ...) –Henning Makholm (talk) 23:51, 4 March 2011 (UTC)
- I agree. My head almost imploded at one point. I asked about conjugate variables and was greeted with conjugate operators! Thanks for the link to Heisenberg groups; I hadn't seen that before. There must be a straightforward explanation to all of this. I might post on the science reference desk instead; what do you think? — Fly by Night (talk) 00:15, 5 March 2011 (UTC)
One thing in the lectures was that for two variables a and b (not necessarily conjugate) we have the following:

    Δa Δb ≥ ½ |⟨[a, b]⟩|,

where [−,−] is the commutator and the angled brackets relate, I think, to a mean with respect to some probability distribution. The Poisson bracket was mentioned too. I think that two conjugate variables, say x and y, satisfy the relation {x, y} = 1, but I'm not sure. I'd really like to understand this. I'm familiar with all of the content in a mathematical setting; but I can't see how it all fits together. — Fly by Night (talk) 00:25, 5 March 2011 (UTC)
- Our article on conjugate variables says that the uncertainty principle corresponds to the symplectic form, and refers the reader to the MathWorld article as a reference for this statement. I'm not sure that this is a meaningful statement, and the article referred to does not discuss any connection between the two notions. It should probably be removed. From my perspective, a more relevant thing to consider here is Fourier transform#Uncertainty principle: that a function and its Fourier transform cannot both be arbitrarily concentrated into a small neighborhood (with a quantitative result). The position and momentum operators act on the Hilbert space of states, and are Fourier conjugates of each other, so this implies the classical Heisenberg uncertainty principle. Sławomir Biały (talk) 13:43, 5 March 2011 (UTC)
- Here by "Fourier conjugates", I mean that
- up to constants. Sławomir Biały (talk) 13:49, 5 March 2011 (UTC)
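A numerical illustration of that conjugacy (an editorial sketch, not part of the original thread; the grid parameters are arbitrary): under the Fourier transform, differentiation becomes multiplication by ik, which is the operator-conjugation sense in which position and momentum are Fourier duals.

    import numpy as np

    # Editorial sketch: check numerically that conjugating "multiply by ik"
    # with the Fourier transform reproduces d/dx, the sense in which the
    # position and momentum operators are Fourier conjugates (hbar omitted).
    N, L = 1024, 20.0
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular frequencies

    psi = np.exp(-x**2)                               # a Gaussian test state
    dpsi_direct = np.gradient(psi, x)                 # d/dx computed directly
    dpsi_fourier = np.fft.ifft(1j * k * np.fft.fft(psi)).real

    print(np.max(np.abs(dpsi_direct - dpsi_fourier)))  # small: the two agree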
- If we're absolutely determined to make sense of the "corresponds to the symplectic form" statement, then first note that any complex Hilbert space has a canonical symplectic form

    ω(φ, ψ) = Im⟨φ, ψ⟩.

- If A and B are selfadjoint operators, then

    ω(Aψ, Bψ) = (1/2i) ⟨ψ, [A, B]ψ⟩,

- and the discussion in Heisenberg uncertainty principle#Mathematical derivations becomes relevant. Sławomir Biały (talk) 14:18, 5 March 2011 (UTC)
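Spelling out the correspondence (an editorial sketch of the derivation linked above, for a normalized state ψ, writing Ā := A − ⟨A⟩ and B̄ := B − ⟨B⟩ and noting [Ā, B̄] = [A, B]): Cauchy–Schwarz applied to the symplectic form gives

    \tfrac{1}{2}\bigl|\langle\psi,[A,B]\psi\rangle\bigr| = \bigl|\omega(\bar{A}\psi,\bar{B}\psi)\bigr| \le \lVert\bar{A}\psi\rVert\,\lVert\bar{B}\psi\rVert = \Delta A\,\Delta B,

which is exactly the Robertson form of the uncertainty principle.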
- When you write ⟨x, y⟩, what do you mean? Do you mean ⟨φ|ψ⟩ from the bra–ket notation, where ⟨φ| is a bra and |ψ⟩ is a ket, or do you mean something else? — Fly by Night (talk) 15:13, 5 March 2011 (UTC)
- It's the inner product on the Hilbert space (which is the same thing that the bra-ket notation denotes). Sławomir Biały (talk) 15:17, 5 March 2011 (UTC)
- Great, thanks Sławomir. I only asked because the angled brackets are used for other things in this theory, and I wanted to make sure I understood you perfectly. — Fly by Night (talk) 15:53, 5 March 2011 (UTC)
Regarding "mean with respect to some probability distribution", it's a bit more complicated than that. Here's what I've got: Quantum theory in general can be formulated in terms of an abstract complex Hilbert space of "states". What exactly these states are, concretely, varies with the situation we're modeling. In the usual Schrödinger picture the states are wavefunctions, but they can also be just finite-dimensional complex vectors (in discrete systems) or something quite wild (as in non-pertubative quantum field theory). What's important for the present purposes is that they form a Hilbert space. Usually variables ranging over states are named something like or , and the inner product is notated with a bar: .
The magnitude of a state is not physically meaningful: for any c ≠ 0, the states ψ and cψ are physically indistinguishable. The additive structure of the Hilbert space is important inside the theory, so we cannot just quotient it away once and for all, but one sometimes assumes that the state of one's entire experiment is normalized to ‖ψ‖ = 1 (which still leaves a choice of "phase", multiplication with a c on the unit circle).
It is an axiom of the theory that every measurement we can possibly do on the system and get a real number from must be represented by some Hermitian form h in the following sense: measurements in quantum physics are always probabilistic, but the expected value of the measured result when we start from state ψ is h(ψ, ψ). Such a form is equivalent to a self-adjoint operator A on the Hilbert space: h(φ, ψ) = ⟨φ|Aψ⟩. In Dirac notation we write ⟨φ|A|ψ⟩ to emphasize the symmetry that comes from A being self-adjoint. It turns out to be most convenient to work with the operators rather than the forms; when physicists speak of "variables" they usually mean these operators.
The set of self-adjoint operators on a complex Hilbert space forms a real vector space. It's not easy to give them an associative multiplication, except for operators that happen to commute, in which case their ordinary product preserves self-adjointness. Scalar multiples of the identity operator of course commute with everything, and are usually identified with the scalars themselves, so one can seemingly add a scalar to an operator without multiplying it with the identity first. (The self-adjoint operators are, however, closed under the operation (A, B) ↦ i[A, B] = i(AB − BA), which makes them into a Lie algebra. The bracket [A, B] is used to mean just AB − BA, without the factor of i, though.)
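A one-line check of that closure claim (an editorial addition): for self-adjoint A and B,

    (i[A,B])^{*} = -i\,(AB-BA)^{*} = -i\,(B^{*}A^{*} - A^{*}B^{*}) = -i\,(BA - AB) = i[A,B],

so i[A, B] is again self-adjoint.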
When some ψ is implicit, we can write simply ⟨A⟩ for ⟨ψ|A|ψ⟩. If A is any self-adjoint operator, (A − ⟨A⟩)² is also one, and ⟨(A − ⟨A⟩)²⟩ is the statistical variance when observing A (for reasons that I can almost but not quite explain succinctly); its square root is the standard deviation ΔA.
This should give enough to interpret the inequality quoted above:

    ΔA ΔB ≥ ½ |⟨[A, B]⟩|.
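To make the inequality concrete, here is a finite-dimensional sanity check (an editorial sketch; the Pauli matrices and the helper names expect and sigma are mine, not from the thread). For σx and σy in the state (1, 0), both sides equal 1:

    import numpy as np

    # Editorial sketch: verify  dA*dB >= (1/2)|<[A,B]>|  for Pauli matrices.
    A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
    B = np.array([[0, -1j], [1j, 0]])               # sigma_y
    psi = np.array([1, 0], dtype=complex)           # a normalized state

    def expect(op, state):
        # <state|op|state>; np.vdot conjugates its first argument
        return np.vdot(state, op @ state)

    def sigma(op, state):
        # standard deviation  dA = sqrt(<A^2> - <A>^2)
        return np.sqrt(expect(op @ op, state).real - expect(op, state).real ** 2)

    comm = A @ B - B @ A                            # equals 2i * sigma_z
    lhs = sigma(A, psi) * sigma(B, psi)
    rhs = 0.5 * abs(expect(comm, psi))
    print(lhs, rhs)                                 # 1.0 1.0: equality here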
My understanding is that conjugate variables are generally defined to be ones such that [A, B] = iℏ (times the identity). In the Schrödinger case (where states are wavefunctions), this happens to be true when B = F⁻¹AF, with F the Fourier transform (a conjugation (!) in the algebra of linear operators on the Hilbert space, and with a normalization factor involving ℏ stuck in somewhere), but I don't know whether that is a necessary condition, or if it has a clear parallel in non-Schrödinger state spaces.
The question is, does this take us all the way to something involving a symplectic form? There's a skew-symmetric commutator bracket right there, but it is not in general a form, because it produces another operator rather than a scalar. Hmm, Sławomir seems to have answered that. (It is also not clear to me which operators have a conjugate partner in the general case). –Henning Makholm (talk) 15:25, 5 March 2011 (UTC)
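For the record, the Schrödinger-picture computation behind the canonical commutation relation (an editorial addition; here x̂ψ = xψ and p̂ψ = −iℏψ′):

    [\hat{x},\hat{p}]\,\psi = \hat{x}(-i\hbar\psi') + i\hbar\,(x\psi)' = -i\hbar x\psi' + i\hbar(\psi + x\psi') = i\hbar\,\psi,

valid on a dense domain of L²(ℝ).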
- Stone–von Neumann theorem also looks very relevant here (and mentions in passing that observables on a discrete system cannot have conjugate partners). –Henning Makholm (talk) 16:46, 5 March 2011 (UTC)