
Talk:Wave function collapse/Archive 1

From Wikipedia, the free encyclopedia

(No-heading talk sections with initial contribs made before addition of the standard article-talk lead section)

(Content re measurement)

I'm currently interested in sorting out the various problems with Measurement in quantum mechanics; as part of this, I suggest swapping over some content between this article and that, specifically making this page (with its more technical title) a repository for the mathematical details; see Talk:Measurement in quantum mechanics for more. Bth 19:00, 14 Oct 2004 (UTC)

   Whatever was then at Talk:Measurement in quantum mechanics was archived about 3 years later. There should be adequate instructions in the lead section of that talk page for quickly locating the "more" mentioned in '04.
--Jerzyt 08:46, 8 February 2014 (UTC)

This conceptually can be combined with the article on Quantum operations. Admittedly that article is quite technical, but I eventually will put in a more expository introduction. CSTAR 20:09, 14 Oct 2004 (UTC)

   As CStar inserted horizontal-rule markup before and after their own '04 contribution, i've construed it as unrelated to what i retroactively named #(Content re measurement). Nevertheless and in light of the mere 69-minute lapse between them, any editor interested might consider 2nd-guessing me, by weighing the possibility that "This", in CStar's talk contrib, might refer to Bth's preceding talk contrib. (My retrofitted talk-formatting assumes that CStar meant "this" to refer to the accompanying article.)
--Jerzyt 08:46, 8 February 2014 (UTC)


I have removed the section on the Afshar experiment. This now has its own article, and I'm consolidating the same information which has ended up on several Quantum mechanics pages. Samboy 10:04, 21 Dec 2004 (UTC)

Two Copenhagen interpretations?

I don't believe there are two Copenhagen interpretations; one where the wave function is real and another where it is not real. Also, the word 'real' is misleading here since the wave function exists in the complex plane and has both 'real' and imaginary parts. The issue is whether the wave function has a direct physical interpretation, and as far as I know, within the CI, it does not (other than the Born Rule). I have not corrected the text (yet). I suggest the original author do so. green 65.88.65.217 06:10, 3 February 2006 (UTC)

I corrected the text discussed above. After further consideration, I agree that two CI's exist, but they were not, imo, defined clearly in the previous text. green 65.88.65.217 08:05, 3 February 2006 (UTC)

There well may be two CIs (probably more, since it is so ill-defined, but whatever), with differing interpretations of what the wavefunction is, but in all such CIs the wavefunction still collapses. The significance of wavefunction collapse will be lessened by the extent to which the wavefunction is not regarded as physical, but the collapse still occurs. I suggest someone changes the article to reflect this. Michael C Price 19:56, 22 May 2006 (UTC)


The "collapse" of the wave function is really just a metaphor. A way of thinking. There is no mechanical like relationship between the wave function and the experimental data. But if you like thinking in mechanical terms you could imagine the wave function as collapsing, ie. yielding the experimental data. But if you don't require a mechanical picture then the "collapse" metaphor isn't necessary. Remember, CI was addressed to a classical audience - with fairly ingrained philosophical assumptions about causality and determinism. It's almost a hundred years since CI yet many of us are still trying to push quantum theory back into a pre 20th century context to which it never belonged in the first place. CI is an interpretation for the benefit of classical thought - not for the benefit of quantum theory. --220.101.184.56 23:53, 22 May 2006 (UTC)
Very good points all. It seems to me that CI encompasses a range of interpretations about what the wavefunction is (from just information to a physical field), but that collapse still happens. Even in classical physics collapse still happens; it's just that in the classical model collapse is entirely a mental readjustment of your knowledge of the world, and hence it never had importance until QM came along. I think the page needs to say this. --Michael C Price 05:53, 23 May 2006 (UTC)
Article updated as indicated. --Michael C Price 20:42, 23 May 2006 (UTC)
"The 'collapse' of the wave function is really just a metaphor. A way of thinking. There is no mechanical like relationship between the wave function and the experimental data. But if you like thinking in mechanical terms you could imagine the wave function as collapsing, ie. yielding the experimental data." This is how I viewed wavefunction collapse, but then again I haven't formally learned about it yet. I think that the article would benefit from something along these lines being put in the intro, because after reading the existing one, I am no closer to understanding the event. --HantaVirus 15:15, 27 July 2006 (UTC)

Redirect added

I added a redirect from State vector reduction to Wavefunction collapse. In the physics and philosophy literature, in my experience, "state vector reduction" is more common than "wavefunction collapse." (The quantum mechanical wavefunction is often considered as merely the components of the state vector with respect to the position basis, not the state that lives in a Hilbert space.) Probably more people are familiar with "wavefunction collapse," however, as introductory courses in quantum mechanics use this terminology. Google seems to agree. Solitonic 11:52 & 12:13, 14 January 2007 (UTC)


=

See #Substance dualism below, in case i am mistaken, and the ip-contributor 24.... actually intended to respond to Solitonic.
--Jerzyt 10:08, 8 February 2014 (UTC)

Substance dualism

I would add that if we do not limit the definition of science a priori to certain philosophical or religious pre-commitments, then substance dualism is permitted. It is not contrary to pure logic. Thus, not only can Buddhist and other Eastern traditions be seen as compatible with this interpretation of quantum mechanics, but so also are the traditional monotheistic metaphysics. Indeed, such an omnipotent, omniscient, omnipresent observer being of other substance than the universe, might eliminate certain problems of entanglement. — Preceding unsigned comment added by 24.180.142.98 (talk) 02:07, 18 January 2007 (UTC)

   Well, 24. ... may have previewed & been satisfied to have their contrib formatted as an untitled subsection of Solitonic's; that would imply that "this interpretation" means "the interpretation implicit in Solitonic's interp in #Redirect added [immediately above this section Jerzy set up]". On the other hand "===========" would seem to be an extravagant markup for requesting something other than a highly visible disassociation from that then-4-day-old talk section, so my guess is that 24. thinks it is the accompanying article (as it stood early in '07), and not Solitonic's discussion of it, that 24.'s own talk contrib (which i have reformatted here as an independent section) is relevant to.
--Jerzyt 10:08, 8 February 2014 (UTC)

Rename

Greetings,

On the wavefunction talk page, a consensus has been reached that the page should be renamed "wave function". Should this be extended to this page? MacGuy(contact me) 17:14, 15 February 2007 (UTC)

"Spiritual Interpretation"?

Isn't the idea of consciousness being the mechanism for wave-collapse an extension of the Copenhagen Interpretation and more of a philosophical ramification of the Copenhagen Interpretation, rather than a third category? It really seems to me as if the "consciousness causes collapse" concept is being purposefully separated from the Copenhagen interpretation page and this page and then colored with terms like "spiritual" to perhaps induce subconscious bias against the concept.

Though the concept of consciousness causing collapse is supported by "spiritualists," it nonetheless is a philosophical concept born from the Copenhagen interpretation and should be so noted.

I also am curious as to know why the "philosophical ramifications" was removed from the Copenhagen Interpretation article.—Lehel Kovach (talk) 22:38, 12 January 2008 (UTC)

Incompatible with biological Evolution?

Without an observer, the wave function would never collapse. Without wave function collapse, no observer could evolve. I've seen theories that seem to imply that time itself is a cognitive being; how could something without consciousness be evasive to a paradox forming? Either way, this seems to solidify my understanding that theoretical quantum mechanics is quasi-religious by nature.

The more I think about it, the more cockamamie the idea becomes. When does an observer collapse the wave function? It seems as if it occurs when an observer from one wave enters another, perhaps merging the two. But thinking back to the beginning of observers, it would seem as if there is one wave function, not many; as many wave functions are created by the existence of many observers.

It seems like time would continue to compress itself infinitely. On a side note, it seems time would advance at an infinite speed due to the lack of an observer. This is all just a bunch of tail chasing; an excuse for people to flaunt their intelligence and call themselves "gods" with this idea that they are shaping the universe by simply looking at it. --IronMaidenRocks (talk) 04:26, 12 January 2010 (UTC)

The idea that consciousness collapses wavefunctions is indeed rather unbelievable. But there are many other interpretations of quantum mechanics, the simplest of which is that wavefunctions don't collapse at all. --Zundark (talk) 09:23, 12 January 2010 (UTC)
An even simpler interpretation is the ensemble interpretation, in which collapse is a description of a selection of a subensemble.WMdeMuynck (talk) 10:02, 12 January 2010 (UTC)
We could all just be living in a computer simulation. Wave function collapse sounds a lot like clipping to me.--86.175.50.222 (talk) 01:23, 2 July 2010 (UTC)

"Observer"

The article should make clear what it means by an "observer". If there is a certain isolated system that evolves according to the Schrödinger equation, this is obviously going to change when the system interacts with something else, i.e. the measuring apparatus. To speak about an "observer" makes the question seem different from what the question really is (and that can be found in any other area of physics, not just quantum mechanics): What happens when the (isolated) system that evolves (in this case) according to the Schrödinger equation interacts with a certain measuring apparatus (say, several atoms of something)? Speaking about an "observer" obscures things. —Preceding unsigned comment added by 190.188.0.22 (talk) 22:53, 11 April 2010 (UTC)

Symmetry breaking vs. Wavefunction collapse

Any similarity? I found reading these 2 subjects separately confusing. — Preceding unsigned comment added by Mastertek (talkcontribs) 08:43, 20 October 2011 (UTC)

HS Green

Herbert S. Green, of BBGKY hierarchy fame, showed the following (below) years ago. When I posted this once on Slashdot, someone replied that this line of inquiry was being replicated by physicists who were unaware of the Green paper. A mention on Wikipedia might help keep this important work from being forgotten:

"For many years the prescription of von Neumann, usually called the 'collapse of the wave packet', was the accepted view of how this happened. As it assumed that some processes outside quantum mechanics had to be invoked, even going so far as involving the brain of the human observer, people were not comfortable with it, although it seemed the only possible answer. The best known representation of this difficulty appears in the well-known Schrödinger's cat paradox. Bert, together with a number of others such as Wakita and Ludwig, found a much more satisfying explanation, which is basically still the received description, although nowadays in various forms. The idea was to suppose that a measuring apparatus could be of almost any form so long as it was very complicated, that is, contained a very large number (often for mathematical convenience taken to be infinite) of components such as molecules or electrons. The system being measured could be microscopic. When the two systems interact, any 'interference terms' in the state of the microscopic system become vanishingly small purely as a consequence of the size of the measuring instrument. There are, of course, many processes in nature in which a human observer is not involved – especially before homo sapiens evolved – and the von Neumann description is quite unable to say how these could happen. However with Bert's theory all one has to do is to replace the measuring apparatus by the environment to bring about the necessary disappearance of interferences. The only place where this very satisfactory explanation might run into some difficulty is in the early evolution of the universe, where there is no environment!"

http://www.science.org.au/fellows/memoirs/green.html

Disclaimer: I'm a former student of Bert's. He deserves more recognition, but the work stands up by itself. — Preceding unsigned comment added by 173.195.2.125 (talk) 16:47, 18 October 2012 (UTC)

How does Herbert S. Green's theory deal with the Delayed choice quantum eraser experimental results? Aarghdvaark (talk) 03:42, 3 December 2013 (UTC)

Multiple eigenstates with same eigenvalue

What happens when, for an observable, a value is measured that is associated with an eigenspace of dimension > 1? The mathematical description doesn't specify this case. — Preceding unsigned comment added by 94.224.49.243 (talk) 20:34, 27 October 2012 (UTC)

Introduction focuses too much on decoherence?

I am uneasy with the introduction mentioning only one explanation for collapse - decoherence. I am not a QM expert; is this the consensus of scientists today, or is this WP:UNDUE weight on one theory? Traditionally the issue of wave function collapse has been contentious, the focus of conflicting interpretations of quantum mechanics such as objective collapse theory, many-worlds interpretation, etc. Does decoherence settle the question of the different interpretations of QM, or is it just a universal feature of many interpretations that serves to make the different eigenstates unobservable, leaving open questions about the ultimate meaning of the wavefunction? --ChetvornoTALK 14:48, 25 June 2013 (UTC)

Decoherence doesn't settle the question of collapse. It only answers why interference terms diminish when a quantum system correlates with some apparatus. I.e. if |ψ⟩ = Σ_i c_i|s_i⟩ is the quantum system and |A⟩ is the apparatus, it explains why the product state |ψ⟩|A⟩ evolves into the correlated state Σ_i c_i|s_i⟩|A_i⟩, but it doesn't pick out any particular eigenstate |s_i⟩. It only permits us to assign classical probabilities to each outcome. What these probabilities mean depends on your interpretation. Copenhagen considers observables to be inherently probabilistic. Physical collapse interpretations postulate some extra physical process that eliminates all but one. Many-worlds interprets the probabilities as the chance of finding yourself in a particular world. etc. HiMyNameIsKelso (talk) 15:42, 18 March 2015 (UTC)
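The situation described above can be made concrete with a small numerical sketch (an editor's illustration only; the two-level system and its amplitudes are invented): tracing out the apparatus leaves a reduced density matrix whose off-diagonal (interference) terms vanish, yet no particular outcome is selected.

```python
import numpy as np

# A qubit (|0> + |1>)/sqrt(2) correlates with a two-state "apparatus",
# giving the entangled state (|0>|A0> + |1>|A1>)/sqrt(2).
# Basis order for the joint state: |00>, |01>, |10>, |11>.
psi = np.zeros(4)
psi[0] = 1 / np.sqrt(2)   # amplitude of |0>|A0>
psi[3] = 1 / np.sqrt(2)   # amplitude of |1>|A1>

rho = np.outer(psi, psi.conj())   # density matrix of system + apparatus

# Partial trace over the apparatus gives the system's reduced density matrix.
rho_sys = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

# rho_sys is diag(0.5, 0.5): the interference terms are gone and the system
# looks like a classical 50/50 mixture, but decoherence alone has not
# picked out either outcome.
```

The `einsum` subscript string `'ikjk->ij'` sums over the apparatus index `k`, which is exactly the partial trace over the apparatus degrees of freedom.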

Comments on the determination of preferred-basis

In quantum decoherence, the whole universe is divided into a sub-system, which consists of objects being measured and the measuring apparatus, and the rest, or the environment. According to this program, with appropriate influence of the environment, the reduced density matrix of the sub-system associated with the einselected basis can be nearly diagonal. The quasi-diagonalization is called decoherence. To explain the emergence of the classical world without introducing another evolution equation, it was assumed that when decoherence is reached, the state of the sub-system will be one of the basis functions with probability defined by the diagonal elements. This explanation, however, does not completely solve the measurement problem due to the following issues:

1. Unknown meaning of the off-diagonal elements. When the sub-system is decoherent, the off-diagonal elements of the reduced density matrix are not all zero. To assign properties to a quasi-diagonal matrix, a threshold value is needed such that the probabilistic distribution of the state only occurs when the off-diagonal elements are in some sense below the threshold value. Neither the threshold value nor the exact formula to compare the off-diagonal elements against it is known.
2. Ad hoc border between the sub-system and the environment. If the sub-system is enlarged to contain more objects from the environment, the enlarged sub-system may or may not exhibit decoherence. For the former scenario, it is not known which sub-system determines the measurement results. For the latter scenario, the objects being measured in the enlarged sub-system will have averaged observable value, which contradicts the probabilistic results of the objects in the smaller sub-system. It is not known how this contradiction can be resolved.
3. Unjustified einselected basis. Proofs are needed to show that, e.g., position eigenstates are valid basis functions while quasi-position eigenstates are not.

In contrast, for a collapse theory to solve the measurement problem, solutions are needed on the dynamics of collapse, conditions for a collapse to happen, and the determination of the preferred basis. Most of the needed solutions, except the values of some coefficients, can be derived from the requirement of smooth evolution of the wave function.

This article favors quantum decoherence over the collapse model. But considering the difficulties of quantum decoherence, it is necessary to point out the advantages of the collapse model. A better measurement theory may need the collapse process, the Schrödinger equation, and of course the decoherence resulting from the Schrödinger equation. That's why a section was recently added to state that the preferred basis can be obtained from a general collapse equation. Wavefunctionmaycollapse (talk) 03:35, 2 December 2013 (UTC)

I'm not sure 1, 2, and 3 are unresolved in most of the major interpretations that posit decoherence as physical and collapse as subjective.
1. Copenhagen, Consistent histories, Ensemble, Many-Worlds, etc. all interpret off-diagonal terms as the persistence of interference terms in probability distributions. Like the wavefunction/density operator, they are not physical (according to these interpretations), and are instead part of the tools for tracking probabilities.
2. The interpretations mentioned above consider the border to be arbitrary and unphysical, and consider the entire universe to be quantum. The Many-Worlds interpretation considers the entire subsystem+environment to be described by some ontic, objective wavefunction. Copenhagen only insists observables are meaningful in the context of a system's correlation with an external environment. It would not attempt to assign meaning to observables in the context of an isolated quantum system (i.e. the entire universe). Consistent histories generalises the Copenhagen interpretation, and permits us to at least assign probabilities (in principle) to different histories, provided we only consider sets of alternative histories to be different representations of the same reality.
3. I'm not sure what you mean by valid. Decoherence doesn't permit us to identify an ontologically real basis set, but it does tell us that, when we isolate the degrees of freedom we are interested in by constructing a reduced density matrix, we will get an orthogonal basis set. HiMyNameIsKelso (talk) 14:50, 18 March 2015 (UTC)

Bias towards minority opinion!

This article is heavily biased towards the Decoherence Interpretation of quantum mechanics, which is actually a minority interpretation. Especially the claim that quantum mechanics is NOT indeterminist is the opinion of only a small minority. Actually J.S. Bell showed, already in 1964[1], that no (local) theory that is not really indeterminist can reproduce the quantum mechanical probabilities. The following could be a neutral definition of 'collapse of the wave function':

"The term 'collapse of the wave function' or 'collapse of the wave packet' was used in the beginnings of quantum theory to indicate the fact that the result of a measurement in quantum mechanics in general is not uniquely determined by the theory. The wave function is a description of the probability density for finding the object under consideration (e.g. an electron) in space. If that object is found by a measurement in a certain small region, then the probability to find it again is concentrated in the environment of that small region. Thus the probability density that was spread out before has 'collapsed', due to the measurement, into that small region. The term 'collapse of the wave function' indicates in this way the indeterminist character of quantum mechanics."

References

  1. ^ John S. Bell, On the Einstein-Podolsky-Rosen paradox. Physics 1(1964)195–200

MiDri (talk) 09:47, 19 May 2014 (UTC)

You are right, but this is the current fashion and people working in this area are dominant. The reason can be traced back to quantum computation. If it is just environmental decoherence, one day or another we will be able to cope with it and have a working quantum computer. Otherwise, we will be in trouble and a lot of funding could be at risk. So, we are happy with such an unsatisfactory situation.--Pra1998 (talk) 09:54, 19 May 2014 (UTC)
I consider it not advisable to sacrifice reliable information for "political" reasons. That the view against indeterminism seems dominant is, as far as I can see, due to the very intense PR activity of the people who adhere to that 'fringe' view!
I propose to change the last part of the article by mainly erasing some parts. My proposal:
The process of collapse
With these definitions it is easy to describe the process of collapse. For any observable, the wave function is initially some linear combination |ψ⟩ = Σ_i c_i|φ_i⟩ of the eigenbasis {|φ_i⟩} of that observable. When an external agency (an observer, experimenter) measures the observable associated with the eigenbasis {|φ_i⟩}, the wave function collapses from the full |ψ⟩ to just one of the basis eigenstates, |φ_i⟩, that is: |ψ⟩ → |φ_i⟩.
The probability of collapsing to a given eigenstate |φ_i⟩ is the Born probability P_i = |c_i|². Post-measurement, other elements of the wave function vector, c_{j≠i}, have "collapsed" to zero, and |c_i|² = 1.
More generally, collapse is defined for an operator Q̂ with eigenbasis {|φ_i⟩}. If the system is in state |ψ⟩, and Q̂ is measured, the probability of collapsing the system to state |φ_i⟩ (and measuring the eigenvalue q_i) would be |⟨φ_i|ψ⟩|².
Thus the existence of two different kinds of transformation of states—according to the Schrödinger equation and the 'collapse'—is an expression of the indeterminism of quantum mechanics. Researchers who want to regain a determinist theory from quantum mechanics try to avoid the collapse. This takes forms like
  • the Consistent histories approach, self-dubbed "Copenhagen done right"
  • the Bohm interpretation
  • the Many-worlds interpretation
  • the Ensemble Interpretation
However, those approaches are considered fringe theories by most active theorists.
History and context
The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann, in his 1932 treatise Mathematische Grundlagen der Quantenmechanik. Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change:
  • The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
  • The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation).
In general, quantum systems can be described as superpositions of the basis states that correspond to the quantity that is to be measured and, in the absence of measurement, evolve according to the Schrödinger equation. However, when a measurement is made, the wave function collapses—from an observer's perspective—to just one of the basis states |φ_i⟩, and the property being measured uniquely acquires the eigenvalue q_i of that particular state. After the collapse, the system again evolves according to the Schrödinger equation. — Preceding unsigned comment added by 92.211.50.165 (talk) 15:50, 27 November 2014 (UTC)
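The two kinds of transformation in the proposal above (continuous unitary evolution versus discontinuous probabilistic collapse) can be illustrated with a small numerical sketch; this is an editor's illustration only, and the 2×2 unitary, its parameter, and the measurement basis are all invented:

```python
import numpy as np

# Process 2: deterministic, unitary, norm-preserving evolution psi -> U psi.
t = 0.3                                      # invented parameter
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])      # a toy 2x2 unitary (real rotation)
psi = U @ np.array([1.0, 0.0])
assert np.isclose(np.linalg.norm(psi), 1.0)  # unitarity preserves the norm

# Process 1: probabilistic projection onto a measurement eigenbasis,
# here the computational basis {|phi_0>, |phi_1>}.
phi = np.eye(2)
p = np.abs(phi.T.conj() @ psi) ** 2          # Born probabilities |<phi_i|psi>|^2
rng = np.random.default_rng(0)
i = rng.choice(2, p=p)                       # outcome is random, not determined
psi = phi[:, i]                              # discontinuous "collapse" to |phi_i>
```

The assertion makes the contrast explicit: the unitary step always preserves the norm and is reversible, while the projection step picks one eigenstate with the Born probabilities and discards the rest.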
Consistent Histories doesn't attempt to recover determinism. Instead, it is a procedure for applying the Copenhagen interpretation to histories of closed quantum systems. It inherits the probabilistic nature of observables from the Copenhagen interpretation. The Ensemble interpretation also makes no commitment to determinism.
The problem is "Does the wavefunction collapse?" is an ambiguous question. The majority of interpretations, including the Many-Worlds interpretation, involve state reduction. They just assign different meanings to state reduction. Copenhagen, Consistent Histories, and Ensemble all attribute the reduction to the observer updating their knowledge of reality. Many-worlds attributes reduction to an observer finding themselves on a particular branch of a persistent multiverse. This is why I think the table in the "interpretations of QM" article should clarify whether state reduction is physical and ontic, or epistemic. HiMyNameIsKelso (talk) 14:52, 18 March 2015 (UTC)

Bohr on collapse

As far as I know, wave-function collapse or reduction did not figure in Bohr's thinking. There is a sentence in the lead of the article that proposes that he postulates it:

When the Copenhagen interpretation was first expressed, Niels Bohr postulated wave function collapse to cut the quantum world from the classical.[1][citation needed]

References

  1. ^ Bohr, N. (1928). "The quantum postulate and the recent development of atomic theory". Nature. 121: 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0.

That sentence, with its purported "citation", is, I think, mistaken, and probably fanciful. Sad to say, just deleting the mistaken sentence would create problems for the 'logic' of the rest of the paragraph. Chjoaygame (talk) 15:51, 26 October 2014 (UTC) Chjoaygame (talk) 20:41, 27 November 2014 (UTC)

According to Don Howard,<(2004) Phil. Sci. 71: 669–682, on pp. 669–671> who is a careful scholar in this area, "Bohr's complementarity interpretation makes no mention of wave packet collapse ..." And: "Bohr never mentioned wave packet collapse." I myself haven't found evidence to refute that claim of Howard. That doesn't prove that Bohr didn't believe in collapse, but it does make it hard to say that "Niels Bohr postulated wave function collapse". By my reading, the words 'collapse' and 'reduction' are absent from the purported "citation". I think anyone who wants to sustain that Bohr postulated collapse has to provide here a good source for that claim. Absent such a source, I think the above quoted lead sentence should be deleted. Sad to say, that would destroy the apparent 'logic' of that paragraph of the lead. Since the paragraph has logic that depends on the faulty sentence, I think it right to delete the whole paragraph. If someone wants to restore the paragraph with logic that doesn't depend on the faulty sentence, of course he is free to do so. I am not saying the rest of the paragraph is all wrong, I am just saying it is currently written with faulty logic and needs to be thoroughly re-written with adequate sourcing if it is to survive.

This is relevant to the idea that the Copenhagen interpretation was agreed between Heisenberg and Bohr. It may suggest that it is inaccurate to say that the "orthodox interpretation" and the "standard interpretation" and the "Copenhagen interpretation" are all one and the same thing under three aliases. I think the "orthodox interpretation", which might be read as the 'Heisenberg interpretation', does use wave function reduction, or collapse. Heisenberg 1927 (NASA translation): "each position determination reduces the wave packet again to its original dimension λ". But that the "Copenhagen interpretation" does so is far from obvious, without evidence of the concurrence of Bohr. Yes, von Neumann in 1932 talked about collapse, but was that within the Copenhagen boundary? Chjoaygame (talk) 01:56, 28 November 2014 (UTC) Chjoaygame (talk) 10:00, 28 November 2014 (UTC)

Fixed. Chjoaygame (talk) 09:20, 29 November 2014 (UTC)

Erroneous citation to Cohen-Tannoudji in the second paragraph

In the second paragraph the citation to the work of Cohen-Tannoudji is wrong. There are many things to say here about the book:

  • it is at the undergraduate level, not at the graduate level: this is clearly stated in the "Introduction", where the authors assert that graduate students are not the readership the book is intended for ("We hope [...] that this book will also be of use to other readers such as graduate students [...]");
  • it avoids talking about "alternative interpretations" and adheres to the " "orthodox" quantum theory" ("Introduction");
  • page 22 doesn't talk about decoherence or anything directly related to this article, as far as can be seen;
  • "decoherence" is not indexed at the end of the book;

If it's true that decoherence is usually taught at the graduate level, other references should be given and this one removed. — Preceding unsigned comment added by 93.44.218.220 (talk) 00:04, 9 February 2015 (UTC)

I had a look through Cohen-Tannoudji, and found (in Vol. II, on p. 1343ff) a discussion of hydrogen atom decay that touches on decoherence. Was this what was originally meant? It is not an explicit discussion of collapse/decoherence... but it does cover the topic being referenced. zowie (talk) 17:03, 5 December 2016 (UTC)

sources for "collapse"

I so far have not succeeded in finding original authoritative or 'classic' sources for the term "collapse". It might accordingly seem to be an invention of latter-day armchair commentators rather than a term used by originative physicists? Can someone enlighten me about the origin of this term? The term 'reduction' is used by Heisenberg and by Born. Chjoaygame (talk) 16:57, 4 December 2015 (UTC)

Weinberg, Lectures on Quantum Mechanics, p. 82. There is also DeWitt's essay, which is online here http://www.projects.science.uu.nl/Igg/jos/foundQM/qm_reality.pdf . Some refer to this process as a "projective measurement" (particularly in a context where it needs to be distinguished from other types of quantum measurements), because the "collapse" is mathematically the action of a projection operator on the initial state. Waleswatcher (talk) 18:01, 4 December 2015 (UTC)
Thank you for your kind note here. It doesn't, however, supply what I am looking for. In that section of those Lectures, Weinberg is acting as a latter-day armchair commentator, though, of course, elsewhere he acts as an originative physicist. I would like something if possible from before 1960, or even before 1930. I should sharpen my question: When was the term "collapse" introduced? Chjoaygame (talk) 20:03, 4 December 2015 (UTC)
I don't know, but you might try reference 8 of DeWitt's essay, linked to just above. Under the heading "Copenhagen collapse", he says it contains a selected list of papers on the topic (he probably means the Copenhagen interpretation, but presumably collapse is discussed in some of them). Waleswatcher (talk) 20:19, 4 December 2015 (UTC)
Thank you for this lead. Petersen was published in 1968, and evidently treated the term 'collapse' as a routine term of art. I didn't find him giving a source for it. On page 136, he writes "Does this transition take place in the physical realm, or is the collapse of the wave function a process in the mind of the observer?" I think that question is not well posed. Petersen must be a very bad person, for his next but one sentence reads "Many of our attempts to express the characteristics of the quantal description may be as unsatisfactory as the early attempts to express the meaning of a derivative." That naughty word "quantal"! He seems to be a serial offender; on page 139: "The tendency to characterize the quantal description in ontological terms is conspicuous also in the early phases of the Copenhagen interpretation." And "the quantal formalism does not reflect the quantum domain's structure of being". Worse, I found 62 occurrences of it in the book. Shocking! Chjoaygame (talk) 21:49, 4 December 2015 (UTC)
Von Neumann's book http://www.amazon.com/Mathematical-Foundations-Quantum-Mechanics-Neumann/dp/0691028931 was originally written (in German) in 1932. He discusses projective measurements at length, although I don't know if he uses the term "collapse". Waleswatcher (talk) 02:51, 5 December 2015 (UTC)
Thank you for this. I have looked in the English translation of von Neumann's book (and now have checked the German). My impression is that he uses neither Heisenberg's word 'reduce' nor the questioned word "collapse", nor a near substitute. As far as I have so far seen, the translator simply says there are two forms of "intervention", what the translator calls "arbitrary changes by measurement" (German: "die willkürlichen Veränderungen durch Messungen"), and what he calls "automatic changes which occur with the passage of time" (German: "die automatischen Veränderungen durch den Zeitablauf"). Personally, I wouldn't count evolution in time of an isolated system as a form of "intervention" (German: "Eingriffen"), but that word is not crucial. It is customary to imply that he thought in terms of 'collapse', but I have not seen evidence of that.Chjoaygame (talk) 03:54, 5 December 2015 (UTC)Chjoaygame (talk) 04:40, 5 December 2015 (UTC)Chjoaygame (talk) 05:26, 5 December 2015 (UTC)
As I thought it over, Bohm came to mind as a possible culprit. And indeed, the word "collapse" is introduced and appears 5 times on p. 120 of Quantum Theory (1951), with no citation of a previous source. Perhaps there is one?Chjoaygame (talk) 11:50, 5 December 2015 (UTC)
Well, in the end it's just a term. The mathematical operation von Neumann described is called "projection" - and "collapse" isn't a bad description of that operation (although "projection" is better). Anyway that's the term that's used pretty universally these days, so that's what wikipedia should use too. Waleswatcher (talk) 01:17, 6 December 2015 (UTC)
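The "projection" operation discussed in this thread can be put in toy numbers. A minimal sketch of my own (the amplitudes are made-up illustrative values, not anyone's endorsed formulation): measure a qubit in the computational basis and apply the projector onto the observed outcome.

```python
import numpy as np

# Projective measurement of |psi> = a|0> + b|1> in the computational basis.
# The amplitudes a, b are arbitrary illustrative values.
a, b = 0.6, 0.8
psi = np.array([a, b])

P0 = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto |0>
prob0 = psi @ P0 @ psi                   # Born rule: <psi|P0|psi> = |a|^2
post = P0 @ psi / np.sqrt(prob0)         # renormalized post-measurement state

print(prob0)  # 0.36
print(post)   # [1. 0.]
```

Note that the map from psi to post is non-linear (the renormalization divides by an amplitude that depends on psi), which is why this "collapse" cannot arise from Schrödinger evolution alone.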
Yes, but where is the end? The drama of "collapse" supports a vast and futile industry of chatter. Heisenberg accounted for it in simple and natural terms, that more or less elude the mass of workers in that industry, because their thinking is blinkered in ways that are not customarily recognized. By failing to grasp Heisenberg's simple account, that industry seemingly endlessly obstructs that path from here to the end. So this article calmly writes such industrially sanctioned futile waffle as
The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem. The problem is deflected by the Copenhagen Interpretation, which postulates that this is a special characteristic of the "measurement" process. Everett's many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid; that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent.
Originating from de Broglie–Bohm theory, but no longer tied to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the consistent histories interpretation. A general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations. In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation.
The significance ascribed to the wave function varies from interpretation to interpretation, and varies even within an interpretation (such as the Copenhagen Interpretation). If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.
This is a pity.Chjoaygame (talk) 02:48, 6 December 2015 (UTC)
There is certainly a lot of talk about interpretations, most of which is best ignored - on that we agree. But it's not really a "vast...industry" - very few physicists pay any attention to this ("shut up and calculate"), and as for philosophers, in my opinion this question is more interesting than many they spend lots of time discussing. And in a few areas of physics research these questions actually matter, like trying to understand the quantum mechanics of the big bang. Anyway, this isn't really the right place for this discussion, unless you have a proposal for how to change the article. Waleswatcher (talk) 14:16, 6 December 2015 (UTC)
Thank you for this response. Yes, I exaggerated the size of the industry. As for the interestingness of the question: I think the answer is simple, if one thinks clearly; nothing for philosophy to puzzle over. One can start with Born's reading of the wave function, as an ingredient for a calculation of a probability. A probability necessarily refers to its appropriate sample space (populated, dare I say it, by specimens of the appropriate species). In general a different sample space will lead to a different probability. For the case of detection after the split beams have been reunited (so as to show interference), one has one sample space, described by the un-collapsed wave function. For the case of detection of one of the sub-beams before they are reunited (so as to show a component of the superposition), one has another wave function, the collapsed one. The 'change' of the wave function is a change of descriptive viewpoint, a re-conceptualization. In one viewpoint, the experiment looks at an un-intercepted superposed beam; in the other, it looks at a component sub-beam that has been intercepted before superposition. Different beams, different sample spaces, different probabilities. What is the physical difference between the beams? The un-intercepted beam eventually has not suffered a deflection by the crystal set-up, and eventually has no transverse momentum added to it by that set-up. The intercepted beam has been deflected by transverse momentum added to it by the crystal set-up. Different momenta, different wave functions. Where is the mystery? Feynman's "mystery" is created in his own mind by his muddled thinking of the wave function as an enduring physical object, rather than rationally, as Born's ingredient for a calculation of a probability with its appropriate sample space. Nothing there for philosophy to puzzle over. Chjoaygame (talk) 18:11, 6 December 2015 (UTC)Chjoaygame (talk) 21:43, 6 December 2015 (UTC)
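The "different sample spaces, different probabilities" point above can be illustrated numerically. This is my own toy sketch (the plane-wave amplitudes and the wavenumber 10 are made-up choices): detecting after the sub-beams are reunited gives an interference pattern, while intercepting one sub-beam first gives the plain sum of the two sub-beam probabilities.

```python
import numpy as np

# Two path amplitudes at detector positions x (toy plane waves).
x = np.linspace(-1.0, 1.0, 201)
psi1 = np.exp(1j * 10 * x) / np.sqrt(2)   # sub-beam 1
psi2 = np.exp(-1j * 10 * x) / np.sqrt(2)  # sub-beam 2

# Experiment 1: beams reunited, amplitudes add -> interference fringes.
p_reunited = np.abs(psi1 + psi2) ** 2     # = 2*cos(10x)^2, oscillates

# Experiment 2: one sub-beam intercepted first -> probabilities add, flat.
p_intercepted = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # = 1 everywhere
```

The two detection arrangements are different experiments, so they are described by different probability distributions over different sample spaces.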

This is all getting a bit far from the original topic of this thread, which was who used the term "collapse" first. Wavefunction collapse is a topic that raises strong opinions, and everyone has their own favorite interpretation. As Wikipedia editors our job is not to play favorites with the interpretations (much less to make up our own) but simply to report impartially (WP:NPOV) the current opinions on the subject. See WP:NOTFORUM and WP:Righting Great Wrongs. Chjoaygame, this page is for discussion of the article, not for discussion of the topic of wavefunction collapse itself or our opinions about it, please read WP:TALK. --ChetvornoTALK 20:26, 6 December 2015 (UTC)

"Physical meaning"

I just read this section and found it almost totally incomprehensible, starting with the first sentence "By use of a suitable preparative apparatus, quantum system can be prepared in a pure state with its specimens as elements of an original beam. It has an original wave function." There is no need to discuss pure states in an article on collapse of the wavefunction (mixed states collapse too, in exactly the same way), grammatically there is a missing article ("a"?), I don't know what "specimens" refers to, etc. For context I am a professor of physics that teaches QM at both the undergraduate and PhD level, and uses it every day in research. If I cannot understand the section, there is a big problem. In addition there are far too many references, and none of them support what is written (more precisely the references say perfectly clear and correct things, some of which are relevant to this article, but none seem to correspond to the text as written). Given this, I intend to delete the entire section, but I will wait for comments first. Waleswatcher (talk) 18:07, 4 December 2015 (UTC)

I absolutely agree. I think what this was originally referring to was the filtering of polarization states in a Stern-Gerlach apparatus described by Feynman in the Feynman Lectures on Physics, Vol. 3, Ch. 5 (the original citations were to Feynman) but I also find it incomprehensible. What is a "diffractive object"? Diffraction does not cause separation of wavefunction states as far as I know. And as you say even if this was comprehensible, the subject has little to do with wavefunction collapse. I support deletion of the section. --ChetvornoTALK 21:10, 4 December 2015 (UTC)
Pace the combined wisdom of the foregoing comments from esteemed experts, this article would not give a newcomer reader the faintest idea of the physical meaning of "collapse". To an expert reader, accustomed to reading quantum mechanical material, the article seems perfectly simple, and to make perfect sense. But the key physical facts involved are not mentioned, and are scarcely hinted at in the article. A newcomer reader in Wikipedia is entitled, I think, to some physical view of the topic. I recently offered an attempt to supply such a view. When writing it, I thought I had taken care to limit what I wrote to elementary statements of the obvious. But it was criticized as having too few references and, when I supplied more references, it was rejected as having "far too many references" and as "incomprehensible". My own view is that the Heisenberg account of reduction is unmysterious and physically obvious and adequate. My offered edit tried to unpack it into ordinary language. In a nutshell, what is observed depends on where the detector is placed, hardly mysterious.
But no, the unmysterious, physically obvious, and adequate is unacceptable, or uninteresting, to some quantum mechanics experts. To judge from this article, they accordingly seem to have followed David Bohm's doctrinal transmogrification of the topic, and made it into a quantum mechanical metaphysician's mysterian playground.
The lead of the article says "However, in this role, collapse is merely a black box for thermodynamically irreversible interaction with a classical environment." Though this grinds out some clichés, it is sheer metaphor, not physics. A little further on, the lead adds: "Nevertheless, it was debated, for if collapse were a fundamental physical phenomenon, rather than just the epiphenomenon of some other process, it would mean nature was fundamentally stochastic, i.e. nondeterministic, an undesirable property for a theory." This is metaphysical commentary, not physics. There is no attempt to unpack for the reader the simple physics that Heisenberg sketched. The next sentence of the lead has had a failed citation flag for the past year, with no response: "Decoherence explains the perception of wave function collapse in terms of interacting large- and small-scale quantum systems, and is commonly taught at the graduate level (e.g. the Cohen-Tannoudji textbook).[failed verification – see discussion][1]" This sentence makes a bold physical claim of dubious validity, with no reference, and supports it with a claim about how it is taught, supported by a failed citation.
  1. ^ C. Cohen-Tannoudji (2006) [1973]. Quantum Mechanics (2 volumes). New York: Wiley. p. 22.
I guess that is enough for now. Are any editors concerned to see some physics in the article, or is the newcomer Wikipedia reader expected to be happy with raw mathematics and metaphysics? Chjoaygame (talk) 08:27, 8 December 2015 (UTC)

This article is a roll of near-meaningless metaphysical babble occasioned by a resolute refusal to look at the obvious and simple physics of quantum observation, as described by Heisenberg. In the light of a simple physical approach, there is no reason to rattle on as does the article.Chjoaygame (talk) 22:19, 12 February 2016 (UTC)

Add quote from Heisenberg's paper

I suggest we add, in the history section, the following short quote from the 1927 paper by Heisenberg introducing the "collapse of the wave packet":

The second determination of the position [of a spreading electron wave packet] selects a definite "q" [position] from the totality of possibilities and limits the options for all subsequent measurements. After the second position determination the results for later measurements can only be calculated when one again ascribes to the electron a "smaller" wave packet of extension λ (wavelength of the light used in the observation). Thus each position determination reduces the wave packet to its original extension λ. L0rents (talk) 17:03, 8 November 2018 (UTC)

I think your edit was positive (including the translation of the article title, which is nice) and I support putting it back in. Maybe the editor that undid it can comment? Waleswatcher (talk) 17:21, 8 November 2018 (UTC)

I am not objecting to the quote per se, but I would argue it be adduced as a support footnote where it was inserted before. The article goes through greatly unnecessary cycles of controversy and tendentiousness with chapter-and-verse dueling quotes, and I'd opt for moderating that... The word "Copenhagen" has become a rallying cry or a red flag... But the section is not a history of science forum. The quote as a footnote would be salutary. Cuzkatzimhut (talk) 22:31, 8 November 2018 (UTC)

make page: wave function modification (via a cascade of mixed states without any intermediate pure state)

wave function modification (or wave function alteration) vs wave function collapse

Methods/approaches

  1. acceptance of wave function modification and mixed state cascades
  2. having the same result due to the statistical distribution of alternative event conclusions, with the usage of wave function collapses (not modifications) and series of pure states (averaging out, from virtual pure states, what actually happened; it leads to bad physics if we interpret the process as physical actuality; also we might need some biasing variables for corrections - that's merely a calculational approach to mimic reality, not to understand the natural causality)
    this is theoretically erroneous, because mixed states actually exist; but it may be helpful for some computer programs which test many alternative particle pathways (sometimes, but not necessarily, these programs run faster); also there might be some mathematical benefits to using this approach for specific problems which don't work well with the acceptance of mixed states (in large objects and systems we want to understand the overall behavior, not to track each particle correctly, but to deliver acceptable solutions) — Preceding unsigned comment added by 2A02:587:4110:7800:AC0C:14C4:8940:2AC1 (talk) 07:26, 10 August 2019 (UTC)

Decoherence does not explain collapse

I've reworked the lead to say that decoherence is not enough to explain wave function collapse. The lead was citing Schlosshauer to support this false statement, which is rather bizarre, because what he actually says is the opposite: Within this interpretive framework (and without presuming the collapse postulate) decoherence cannot solve the problem of outcomes: Phase coherence between macroscopically different pointer states is preserved in the state that includes the environment, and we can always enlarge the system so as to include (at least parts of) the environment. In other words, the superposition of different pointer positions still exists, coherence is only “delocalized into the larger system” (Kiefer and Joos, 1999, p. 5), that is, into the environment — or, as Joos and Zeh (1985, p. 224) put it, "the interference terms still exist, but they are not there" — and the process of decoherence could in principle always be reversed. Schlosshauer already said it better, but let me repeat it in a simpler language: decoherence only reduces a superposition to a mixture. It cannot select one of the elements of the mixture as the actual, which is the whole point of collapse: actually obtaining a single outcome. Tercer (talk) 16:33, 13 March 2020 (UTC)

Thanks, that is what has always confused me about statements that decoherence explains collapse. The Schrodinger equation is linear, so after the wavefunction of the apparatus is entangled with the environment, all the terms in the apparatus superposition, representing different eigenstates of the apparatus, will have corresponding terms in the environment's wavefunction. The interference terms have vanishing amplitude, so the states don't interact, giving the appearance of collapse within any state, but all the states are still present, implying a "Many Worlds" splitting of reality. --ChetvornoTALK 17:10, 13 March 2020 (UTC)
I'm glad you appreciate it. That's exactly what I think. I like to think of the measurement process as being two steps: first <math>(\alpha|0\rangle + \beta|1\rangle)|E\rangle</math> goes to <math>\alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle</math>, and then <math>\alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle</math> goes to (e.g.) <math>|0\rangle|E_0\rangle</math>. Decoherence does the first step. Now to do the second you need a collapse. Decoherence cannot do that, as that transformation is non-linear, and decoherence, as any unitary transformation, is linear. You can just do the first step and leave the second out, but that's Many-Worlds. Tercer (talk) 19:14, 13 March 2020 (UTC)
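The two steps in this exchange can be sketched numerically. This is a minimal model of my own (one system qubit, one environment qubit, with a CNOT standing in for the system-environment interaction), not taken from any of the cited sources:

```python
import numpy as np

# Initial state (a|0> + b|1>)|E>, with the environment starting in |0>.
a = b = 1 / np.sqrt(2)
state = np.kron([a, b], [1.0, 0.0])

# Step 1 (decoherence-like, unitary, linear): a CNOT entangles system and
# environment, producing a|0>|E0> + b|1>|E1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ state                  # amplitudes [a, 0, 0, b]

# Step 2 (collapse, non-linear, NOT achievable by any unitary): project
# onto the |0> branch of the system and renormalize.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
branch = P0 @ entangled
branch = branch / np.linalg.norm(branch)  # |0>|E0>
```

Step 1 is just matrix multiplication by a unitary; step 2 depends non-linearly on the state through the renormalization, which is the thread's point that decoherence alone cannot supply it.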
And if such a hypothetical "collapse" occurred after decoherence there would be no way to detect it, since the different branches of the wavefunction are noninteracting. There has never been any evidence of "wavefunction collapse", it has all been decoherence. I have to conclude that from a modern perspective the only reason collapse interpretations such as Objective collapse theory persist is that some physicists don't like the messiness of a world wavefunction that splits uncountably trillions of times per second. --ChetvornoTALK 22:38, 14 July 2020 (UTC)

Unitary Field Theory

Why is it being claimed that QFT in general is unitary? The algebra is Type III and thus there are no pure states. Even more fundamentally, in general spacetimes translations aren't an isometry, and thus time translations wouldn't be represented by a unitary (I can provide a nice paper explaining this if required). This article proceeds from only basic non-relativistic QM with no superselection, and according to one user anything else is "nonsense" and RQFT is simply unitary despite proofs to the contrary. 109.78.221.253 (talk) 15:57, 10 December 2020 (UTC)

A quote from Frohlich's paper about this (which I was told was somehow not published and did not in fact say this): "As I have tried to explain, while, for an isolated system, the Heisenberg-picture time evolution of operators, in particular of physical quantities characteristic of such a system, determined by the unitary propagator of the system is perfectly adequate, the time evolution of its states is described by a novel kind of stochastic branching process with a “non-commutative state space" 109.78.221.253 (talk) 16:00, 10 December 2020 (UTC)

Because QFT is unitary. You simply misunderstood Yngvason's paper. It's true that there are no pure states in a type III algebra. What Yngvason doesn't say, because it is not true, is that the evolution is not unitary. As for Fröhlich's paper, it doesn't claim to be published (it says it will appear in "Frontiers in Analysis and Probability"), and it is about the non-relativistic case. It makes no sense to use it to claim that QFT is not unitary. And what he claims to be this "stochastic branching process" is precisely the collapse of the wavefunction under a measurement. He is explicit that evolution otherwise is unitary, and therefore reversible. Tercer (talk) 16:22, 10 December 2020 (UTC)
QFT is unitary at the perturbative level, where it can be approximated by Type I structures. However, Frohlich's paper is published, since "Frontiers in Analysis and Probability" has been published. Frohlich has a paper about the relativistic case which I can link to, but for simplicity in that paper he is using the facts of the relativistic case in a non-rel setting. He explicitly says this, mentioning Buchholz. Your final point shows that you did not understand the paper. See his relativistic paper for more detail:
https://arxiv.org/abs/1912.00726
He is not saying that the stochastic branching process is put in by hand and replicates measurement. He is saying that the operators continue to evolve in a unitary way but that the state does not, i.e. the Heisenberg and Schrodinger pictures are not equivalent in QFT. This is no surprise, as in general the Schrodinger picture might not exist or might require additional renormalizations. I suspect you don't know the C* formalism and are trying to understand the papers on the fly. It's not "collapse under measurement", it's branching due to the emergence of a non-trivial centre of the centralizer. Do you understand the mathematics in detail?
Regardless we know in QFT on curved spacetimes that since time translation is not an isometry of the classical spacetime that it cannot be unitary so I don't know why this is a surprise.
I don't know how you came to the conclusion that the evolution is reversible, since he mentions at multiple points that it is not. I also never said Yngvason said the evolution was not unitary; he was just a simple reference for the Type III property. 109.78.221.253 (talk) 16:37, 10 December 2020 (UTC)
Fröhlich's paper is explicitly about non-relativistic quantum mechanics. He is not using any "relativistic fact", he is simply postulating that the state evolution is given by objective collapse of the quantum state, in equations (9)-(10). That he uses the formalism of C*-algebras is completely beside the point. It is a matter of language, not content. And yes, as a matter of fact I am familiar with the C* formalism. I actually took a course on AQFT with Yngvason himself. Not that it matters, because nobody's credentials can be checked anyway. What matters is what the references say. And neither Fröhlich's nor Yngvason's papers say anything close to the statement you added: Decoherence in quantum field theory is still not fully understood, but there even the combined system and environment seem to enter a mixture of classical alternatives. This is due to the Type III nature of the algebra of observables where there is in some sense a fundamental loss of information, an enhancement of there being no pure states in field theory. This is just complete nonsense. Tercer (talk) 19:33, 10 December 2020 (UTC)
The first paper I gave uses results from QFT in a non-relativistic setting. However, I explicitly gave you the paper where the relativistic details behind this are discussed. He isn't "postulating" that the state evolution is given by the collapse postulate. If you look at his relativistic paper or the paper I linked with Blanchard, this follows directly from the emergence of a centre of the centralizer. It is not postulated. Even ignoring Frohlich's papers, we have the papers of Doplicher, Figliolini, and Guido 1984; Buchholz and Doplicher 1984; Borek 1985, where the global states of field theories are constructed and shown to be always mixed and not to obey a unitary evolution. And also, as I have said, in curved spacetimes time translation isn't an isometry, so you wouldn't even expect it to be unitarily implemented. Again you're not actually reading the papers and just claiming it's postulated. Look at the relativistic paper or the older published paper with Blanchard. The Buchholz papers prove the algebra has a growing centre and thus we can't have unitary evolution. You keep saying it's "nonsense"; if so, what's wrong with Buchholz's proof that the algebra has a non-trivial and growing centre? 51.37.153.179 (talk) 13:11, 11 December 2020 (UTC)
I have read Fröhlich's and Yngvason's papers, and saw that you just misunderstood them completely. I'm not going to waste my time going through more papers and trying to figure out what is it that you misunderstood. Your statement that QFT is not unitary is nonsense; I don't know what Buchholz did prove, but I'm sure it is not this. Do you have any passage from him actually stating that QFT is not unitary? Yeah, I thought so. It's just your misinterpretation. Similarly, does any of the references even address the question of decoherence in QFT, or is it again your speculation? Mind you, that falls in WP:OR. Tercer (talk) 13:37, 11 December 2020 (UTC)
I already gave a quote to that effect. And sorry, but I did not misunderstand them. Frohlich directly says that the state does not evolve unitarily but via a stochastic branching process, due to the algebra having a non-trivial centre of its centralizer. It's right there in the paper. That's why he discusses the centre of the centralizer/stabilizer and how the state becomes more mixed upon the emergence of new observables in the centre of the centralizer. Fine if you don't want to read other papers that explain the details more, but it is clear to me that you don't understand them. If I'm wrong, how is the state's unitary evolution preserved in the presence of a non-trivial centre of the centralizer/stabilizer (i.e. double stabilizer)? Buchholz proved it isn't, but you seem to say you know better. — Preceding unsigned comment added by 51.37.153.179 (talk) 13:44, 11 December 2020 (UTC)
So you have nothing to support your statement, got it. I'm removing it again. Tercer (talk) 14:25, 11 December 2020 (UTC)
No, as seen below you just can't read the papers because you don't have the prerequisite background in the material. I've explicitly backed up what I said and explained the relevant facts. You don't even address anything in your response. No surprise, since you don't know the formalism well enough to distinguish a pure state from a vector state. 51.37.153.179 (talk) 15:06, 11 December 2020 (UTC)
We shouldn't be using primary sources that advance the authors' own view of how to revise QFT or reinterpret quantum mechanics once again, like the "ETH Approach". (I'm happy to indulge in that in other places — plenty of time to waste in lockdown — but that's not what Wikipedia is for.) It's already enough work to cover the standard textbook material. XOR'easter (talk) 22:00, 10 December 2020 (UTC)
It's just being used as a summary on what is known in QFT. Other references are given to the Type III nature of QFT and the emergence of a centre to the algebra (this is proven in the Buchholz paper), Frohlich just sums this up. This is old knowledge. Some of Zurek's papers used here do the same but seem okay (i.e. summarise work and have some interpretive comments). In fact there is little difference between ETH and Quantum Darwinism as both are calculational frameworks to describe the emergence of conditionalisable (i.e. classical) facts. I don't see why Quantum Darwinism can be referenced but not ETH, especially when only being used as a summary.51.37.153.179 (talk) 13:12, 11 December 2020 (UTC)
Actually, taking Quantum Darwinism as definitive would be making the same error. XOR'easter (talk) 18:02, 11 December 2020 (UTC)

To separate this out before the chain goes too deep. If the algebra is of Type III with a non-trivial centralizer then the total state is necessarily mixed. If the algebra is Type III there are no pure states, so how could saying "the combined system and environment seem to enter a mixture" be nonsense? They simply can't be pure, since there are no pure states. What's the issue with this? If we have a centralizer then the decomposition over some observables is unique; thus for certain observables the mixture is over classical facts. This was proven by Buchholz in the referenced papers. Is it being claimed here that the total state is actually pure? How could that be, if in QFT all states are mixed due to the Type III property of the algebra? I really don't understand how the total state being mixed is nonsense; it follows straight from the algebra being Type III. 51.37.153.179 (talk) 13:33, 11 December 2020 (UTC)

This is just an irrelevant technicality. If the state is not pure to begin with, how can it "enter a mixture"? And I should remind you that all states in a type III algebra are vector states, so one could as well argue that all states are pure states. The point is just that for every state <math>\omega</math> there is a vector <math>\Omega</math> such that for all A in the type III algebra we have <math>\omega(A) = \langle\Omega, A\Omega\rangle</math>. This has nothing to do with the evolution being unitary or not; it is unitary if <math>\Omega \mapsto U\Omega</math> for unitary <math>U</math>. This is true, and this is what matters, as it makes the theory reversible. Tercer (talk) 14:24, 11 December 2020 (UTC)
There's a lot of errors here. First of all, there is a difference between a state being pure and a state being a vector state. Just because a state is given by a vector state in a particular representation does not mean it is pure. That's the whole purpose of using Tomita-Takesaki modular theory to compute the entropy, and why you must do so in general. One can take a regular mixed state in non-relativistic quantum theory, represented by some statistical operator/density matrix, and use the GNS construction to form a representation where it is a vector state. However this does not make it pure. It's the exact same state. If you used the mere fact of being a vector state to argue whether something was pure or not, you'd enter into a contradiction, as in some representations it would have zero entropy and in others it would have non-zero entropy. It's a basic aspect of the C*-algebra formalism that pure states and vector states are not the same. The former is a classification of the actual state, the latter is a property of a state in a particular representation. As I mentioned above, regular mixed states can be recast as vector states in other GNS reps. This is covered in Bratteli and Robinson. The set of abstract states has no extremal points and thus no state is pure. You cannot argue the state is actually pure because it has a non-unique representation as a vector state in a particular rep. To get something so basic wrong means you simply don't know the C*-algebra formalism, so why argue about it? See this introduction for a simple account: https://arxiv.org/abs/1909.06232. And maybe don't argue about the algebraic formalism until you know it.
Secondly, you're ignoring the fact that such a ket is highly redundant. For Type III algebras several vectors within a rep represent the same abstract state (i.e. the ket associated to a state is non-unique). The abstract state is completely mixed, as shown when you compute the entropy in a representation-independent way. It's daft to say "you could argue it is pure".
Lastly, it is the fact that the von Neumann algebra possesses a non-trivial double centralizer that makes the evolution non-unitary. Again I don't see what's so crazy about this, as you're going to lose unitarity of time translations in a general spacetime anyway, since time translation is no longer an isometry. 51.37.153.179 (talk) 15:03, 11 December 2020 (UTC)
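The "mixed state recast as a vector state" point argued in this exchange has a finite-dimensional analogue. This is my own sketch via purification, which plays the role the GNS construction plays in the thread (it is not taken from the cited papers): the mixed state rho becomes a vector Omega on an enlarged space, with observables acting as A⊗I, yet it is still the same mixed state.

```python
import numpy as np

# A mixed state on C^2 (nonzero entropy, so certainly not pure).
rho = np.diag([0.75, 0.25])
p, U = np.linalg.eigh(rho)  # eigenvalues p_i, eigenvectors as columns of U

# Purification: |Omega> = sum_i sqrt(p_i) |e_i> (x) |e_i>.
Omega = sum(np.sqrt(p[i]) * np.kron(U[:, i], U[:, i]) for i in range(2))

# Expectation values of A (x) I in the vector state reproduce Tr(rho A),
# so the vector Omega represents exactly the same (mixed) state.
A = np.array([[0.2, 0.3],
              [0.3, 0.9]])
lhs = Omega @ np.kron(A, np.eye(2)) @ Omega
rhs = np.trace(rho @ A)
print(lhs, rhs)  # both 0.375
```

The point being illustrated: being a vector in some representation says nothing about purity; purity is a property of the abstract state, not of its vector realization.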
Frankly, this is getting exhausting. I didn't say the state is pure, because I have explicitly shown it is mixed. I said "you could argue it is pure" because it can be represented as a vector state. This shows that the intuitions from finite-dimensional algebras are misleading here, because the important thing about non-unitary evolution there is that it takes a pure state, which can be represented as a vector, to a mixed state, which can't be represented as a vector. This is emphatically not the case in type III algebras: you can represent them as vectors. The fact that they are mixed matters fuck all for unitarity. That you are obsessing about this point only shows you have no idea what you're talking about. Tercer (talk) 15:23, 11 December 2020 (UTC)
Tiring because you are making basic errors and trying to learn the subject on the fly, I expect. You can't argue that they are pure, because they literally are not pure. They are not extremal points in the set of states, because the set of states has no extremal points; thus there is no argument for them being pure. Again, in the next section you make a basic error, which I even gave you a reference for, where you say "to a mixed state, which can't be represented as a vector". This is utterly wrong, because all states, even all mixed states, can be represented as vectors via their GNS construction. Every state is a vector state in its GNS rep. That's a basic theorem of von Neumann algebras, so this talk of "can't be represented as a vector state" is completely wrong.
Instead we move to a representation-independent picture and see that the evolution of algebraic states in QFT is strictly entropy-increasing, since the algebraic centralizer grows over time. Thus you cannot reverse it. This is proven in Buchholz's (and others') papers. Buchholz explicitly shows that the algebra at time t and the commutant at a later time have an infinite-dimensional intersection, and so the centre has enlarged in an irreversible way (you can't re-enlarge the algebra). Basic confusions about mixed states and representations don't change this. 51.37.153.179 (talk) 16:22, 11 December 2020 (UTC)

Buchholz proved the algebra shrinks over time. The commutant at later times and the algebra at earlier times have an infinite-dimensional intersection. This cannot be reversed; it's impossible for any automorphism to do this. The user above is making basic errors in the algebraic formalism, such as not being aware that some states cannot be argued to be pure (not being extremal) and thinking some states have no vector rep. Describe an evolution that could reverse this growing of the centre or accept you don't understand algebraic field theory. 51.37.153.179 (talk) 16:26, 11 December 2020 (UTC)

Did you even read what I wrote? I said that in finite dimensional algebras you can't represent mixed states as vectors, and that this is emphatically not the case for type III algebras. Being disingenuous will get you nowhere. Tercer (talk) 17:29, 11 December 2020 (UTC)
In finite-dimensional algebras you can represent mixed states as vectors via their GNS rep. So another basic error, nothing "disingenuous" about it. Once again:
All states in QFT are mixed so the total environmental state is mixed, so it is not incorrect to describe it as mixed. One cannot "argue that it is pure" since it simply isn't. It's not extremal and has a finite entropy.
Buchholz has proven (and Borek in another context) that the algebra's centre grows and thus the evolution cannot be reversed since no automorphism can alter the centre (simple fact as automorphisms preserve the centre) 51.37.153.179 (talk) 17:57, 11 December 2020 (UTC)
You're still being disingenuous. It's an extremely basic result that you can't represent mixed states as vectors in finite dimensional algebras, which I'm sure you're familiar with. What the GNS construction does to obtain a vector representation is to increase the dimension of the algebra, which is another basic result: you can always purify mixed states using a higher-dimensional Hilbert space. Tercer (talk) 20:15, 11 December 2020 (UTC)
You can represent the mixed states as vectors, of course you can. The GNS construction doesn't increase the dimension of the algebra; it's a completely separate thing from purification. I'm just surprised you keep talking about stuff from von Neumann algebras when you clearly don't understand them. The GNS construction represents the mixed state as a vector state by using a direct sum representation quasi-equivalent to its original one. This vector state is not, however, the same thing as the one obtained from purification. See Haag's "Local Quantum Physics" for more details. You're confusing moving to a reducible representation with purification. Even the Wiki article https://en.wikipedia.org/wiki/State_(functional_analysis) mentions that any state on any algebra can be a vector state. — Preceding unsigned comment added by 109.76.85.27 (talk) 18:02, 12 December 2020 (UTC)
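For what it's worth, the purification construction invoked a few comments up can be sketched numerically in finite dimensions. This illustrates purification specifically (a pure vector on an enlarged system-plus-ancilla space whose reduced state is the original mixed state), which the thread distinguishes from the GNS construction; the particular state is an illustrative choice:

```python
# Sketch of purification in finite dimensions: a mixed 2x2 density matrix rho
# is represented by a pure unit vector |psi> on a larger (4-dimensional) space,
# and tracing out the ancilla recovers rho exactly. This is purification,
# not the GNS construction discussed above.
import numpy as np

rho = np.diag([0.75, 0.25])              # mixed qubit state
p, vecs = np.linalg.eigh(rho)            # spectral decomposition of rho

# |psi> = sum_k sqrt(p_k) |k>_system (x) |k>_ancilla
psi = sum(np.sqrt(p[k]) * np.kron(vecs[:, k], np.eye(2)[k]) for k in range(2))

# Partial trace over the ancilla: (rho_red)_ij = sum_a psi[i,a] psi*[j,a]
psi_mat = psi.reshape(2, 2)              # indices (system, ancilla)
rho_reduced = psi_mat @ psi_mat.conj().T

assert np.isclose(np.linalg.norm(psi), 1.0)   # |psi> is a unit vector (pure)
assert np.allclose(rho_reduced, rho)          # reduced state is the original rho
```

The enlargement of the Hilbert space here is exactly the point of contention: the GNS vector representative of a mixed state is a different object from this purifying vector.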
In finite-dimension-land, nothing stands in the way of a mixed state evolving unitarily: ρ ↦ UρU†. The claim that having no pure states implies non-unitarity is at best a red herring. If the actual basis for the claim of non-unitarity is that the algebraic centralizer grows over time, then that's the statement which should have been made in the first place (and "an enhancement of there being no pure states in field theory" was an excessively vague way of saying so — what does "enhancement" mean?). And if you're curving the spacetime background, you're talking about a different theory. XOR'easter (talk) 18:02, 11 December 2020 (UTC)
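The finite-dimensional point can be sketched numerically: a mixed state evolves unitarily under U = exp(-iHt), and its mixedness poses no obstacle to reversing the evolution with U†. The Hamiltonian and state below are illustrative choices, and this says nothing about the type III QFT case in dispute:

```python
# A mixed state evolving unitarily in finite dimensions, rho(t) = U rho U^dagger
# with U = exp(-iHt), and the evolution reversed exactly by U^dagger.
# Illustration only; not a claim about type III algebras.
import numpy as np

H = np.array([[1.0, 0.3], [0.3, -1.0]])          # a Hermitian "Hamiltonian"
t = 0.7                                          # evolution time (hbar = 1)

# U = exp(-iHt) via the spectral decomposition of H (pure numpy, no scipy)
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

rho = np.diag([0.75, 0.25])                      # mixed initial state
rho_t = U @ rho @ U.conj().T                     # evolved state
rho_back = U.conj().T @ rho_t @ U                # reversed evolution

assert np.allclose(U @ U.conj().T, np.eye(2))    # U is unitary
assert np.isclose(np.trace(rho_t).real, 1.0)     # trace preserved
assert np.allclose(rho_back, rho)                # evolution is invertible
```

Whether the inverse evolution is physically implementable, rather than merely mathematically available as here, is precisely what the rest of the thread argues about.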
Moreover, as a matter of article organization and Wikipedia style, this text is not a good edit. The introduction is supposed to summarize what the rest of the article says, and the rest of the article says almost nothing about decoherence and even less about decoherence in QFT. XOR'easter (talk) 18:08, 11 December 2020 (UTC)
My claim was never that mixed states imply the evolution is non-unitary. One has to distinguish between a mixed state and a mixed state over classical alternatives, i.e. a factor mixed state and a non-factor mixed state. So:
First of all, states in QFT are mixed states. This does not show non-unitary evolution. However, Buchholz and Borek have shown that, in addition to this, since the algebra has a growing centre, the mixed states become mixed states over classical facts, i.e. non-factor mixed states.
I tried to phrase this as "become mixed states over classical facts" originally and then changed it to a "loss of information", since both ways of phrasing it are found in the literature. I accept that there are probably better ways of phrasing it. But there is fundamental information loss/growing centre/non-factor mixed states, and this is more than/goes further than/is an enhancement of the basic fact of all states being factor mixed. The enhancement is from factor mixed to non-factor mixed. (Also, I'm not curving the spacetime; that was just an example of another context where we don't have unitarity.)
I fully accept phrasing things badly, but the fault isn't purely on my end here; there is an element of arguing against me despite not knowing the mathematics well enough and constantly calling things found in the literature "nonsense". Instead of the discussion being about how to phrase the factor-mixed vs non-factor-mixed issue, it turned into a debate on these proven features even being present. I tried to phrase the non-factor-mixed aspect as "a mixture of classical alternatives" in my first edit.
And yes, I see that the introduction is a poor place to put it; however, the original intro did already mention decoherence in the context of what non-rel QM decoherence shows. I added that QFT decoherence doesn't show the same. So both should go then. 51.37.153.179 (talk) 18:28, 11 December 2020 (UTC)

To be more neutral then: what is a non-"excessively vague" way of stating the difference between factor and non-factor mixed states, and where should it go in the article, as it is highly relevant to the issue of collapse?

How about: "It has been shown that in quantum field theory the state does not evolve in a unitary manner in general. In QFT states are in general mixed, but in the presence of massless modes they also become what is called "non-factor mixed". This means that the theory possesses an increasing number of classical observables, and the state factorises over them in a unique way (in contrast to the non-unique factorisation of general mixed states), evolving into a form equivalent to a classical probability distribution over alternatives for these particular observables." 51.37.153.179 (talk) 18:49, 11 December 2020 (UTC)
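A toy finite-dimensional sketch of what "a classical probability distribution over alternatives" looks like (this is ordinary dephasing of a qubit, far simpler than the type III, non-factor situation the proposed text actually concerns):

```python
# Toy illustration (finite-dimensional, NOT the type III QFT situation above):
# full dephasing in a fixed basis removes the off-diagonal "interference"
# terms of a density matrix, leaving a diagonal matrix that reads as a
# classical probability distribution over the basis alternatives.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)                  # pure superposition |+><+|

dephased = np.diag(np.diag(rho))            # keep only the diagonal entries

assert np.allclose(np.diag(dephased), [0.5, 0.5])           # classical 50/50
assert np.isclose(np.trace(dephased @ dephased).real, 0.5)  # purity dropped below 1
```

In the toy case the diagonalisation depends on choosing a basis by hand; the point of the non-factor claim is that in QFT the classical observables, and hence the factorisation, would be singled out uniquely.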

I would like to note that Wikipedia is not a textbook, it does not exist to replace a traditional education, and secondary sources are greatly preferred. In my opinion, none of this should be included, regardless of its technical correctness, as it is not represented in secondary sources, it is obviously not generally agreed upon, and it is all unnecessary detail. If there is little chance a physics student would learn these details prior to beginning the research portion of their PhD, then it has no place in a general WP article on the subject, which exists to provide an overview that is readable and verifiable by the general public. Just my 2 cents. Footlessmouse (talk) 20:21, 11 December 2020 (UTC)
But the vast majority of the material in this article is not in secondary sources, if I have your use of the term correct. Also, absence from secondary sources does not really imply something is not agreed upon; it might just be too advanced. However, that goes for the majority of this article's content; otherwise it would just be a short explanation of the projection postulate. A huge number of physics articles on wiki don't meet your standards. 51.37.153.179 (talk) 20:47, 11 December 2020 (UTC)
Just a quick reply, I don't plan on watching this page and will not say any more. Not my standards, that is what guidelines say. I know that many of our physics articles are in bad shape, not a reason to make things worse. I said it is obviously not generally agreed upon because you are in the middle of debating these points with other physicists who do not agree with you. If there is a great deal of content in the article that is not represented by secondary sources, it should be removed. Your claim that there would be no information in the article if we stuck to textbooks is ludicrous. Footlessmouse (talk) 20:59, 11 December 2020 (UTC)
This is quite odd. Does everybody here jump to massive hyperbole in discussion? I wasn't saying there'd be no information. I said that if you wanted to pare it down to what is only in secondary sources then, keeping to what you mentioned, it should be just a brief explanation of the projection postulate. That might be better than the current article. Where did I claim there'd be no information? Also, again, you can't use disagreement here to demonstrate actual disagreement on a topic of physics. There are no articles or research papers disputing what I referenced. That's basically just saying that if a Wikieditor argues something then it is in actual dispute in academia. 109.76.85.27 (talk) 21:58, 11 December 2020 (UTC)

Well, I edited the article to remove decoherence from the introduction and keep it in its own section. All I put in was that it is an open question whether the environmental state is reversible, and I referenced Peres's and Omnes's discussions of this, which are widely cited. Apparently this (and removing the claim that collapse is a black box) is incorrect, and apparently Omnes and Peres don't discuss decoherence, when they in fact have chapters on it and on the fundamental nature of reversibility. Peres in particular is a highly cited text in this regard. And yet this is "original research". Considering that the editor Tercer has made basic errors in the relevant formalism (thinking mixed states cannot be repped as vector states when in fact even the Wikipedia page on algebraic states says otherwise), can I at least not be accused of being the only one engaged in an edit war. Saying my references don't discuss decoherence is explicitly wrong; they do. They even have whole chapters on it. 109.76.85.27 (talk) 22:53, 12 December 2020 (UTC)

Okay, one more reply since you made those edits. How is this a thing: "Collapse is one of two processes by which quantum systems evolve in time"? I think some of your edits read well, but I definitely object to your changes of the lead; it actually made sense before. Edit to previous: Primary sources can sometimes be used, but their findings or philosophizing should generally not be treated as facts; they can be mentioned with qualifications. It also doesn't look like there is that much here, that isn't common knowledge, that is purely based in primary sources, but I didn't look too deep. Footlessmouse (talk) 05:50, 13 December 2020 (UTC)
First of all "Collapse is one of two processes by which quantum systems evolve in time" is a sentence the intro originally had, I didn't insert it. I was thinking about how to rewrite it because it isn't really valid. I've now changed the "systems evolve in time" to it being how the "state is updated". There's probably a better way of phrasing this but I think focusing on the state over the system is better
The intro might have "made sense" before, but it was incorrect, particularly the comments about being a black box, the odd discussion of position and momentum, and the overly definitive statements about decoherence. Above it was mentioned that decoherence suited being in the main body of the text, so I moved it there. As for what I added about decoherence, I don't see under what definition of philosophizing Buchholz and Peres count as treating philosophising as fact; Buchholz is almost completely mathematical, for example. One of my problems with the original article is that it was too dated and philosophical; that's why I inserted things like Belavkin's mathematical work on collapse. There's certainly plenty still to do. For example, collapse has actually been used to fuel a heat engine in recent work in Grenoble.
Another thing to be changed is that the article discusses wavefunctions too much, i.e. position- or momentum-basis pure states. Even the mathematical discussion is based around that, whereas it could be simplified by just discussing an abstract ket.
Could you be more specific about what aspects of the lead were better before? Thanks for taking the time regardless. 64.43.31.141 (talk) 10:48, 13 December 2020 (UTC)

Unitary evolution

Repeatedly re-inserting disputed content gets you nowhere. If you want your edits to be accepted you need to actually demonstrate that it is supported by reliable sources. And you have to give the specific passage, not tell us to read a whole paper or a whole book. I have already wasted my time reading two papers you cited, only to find out that they do not support your claims at all. I am not going to do that again.

The fundamental difficulty with finding a reliable source saying that the fundamental evolution is not unitary is that this is not true. The only case where evolution is suspected to be non-unitary is black hole evaporation, and this is considered to be one of the biggest problems in modern physics, precisely because it is believed that even then evolution must be unitary. And mind you, this is QFT on a curved spacetime. Your claim that evolution is non-unitary already in non-relativistic quantum mechanics, or QFT on a flat background, is just ridiculous. Tercer (talk) 13:32, 13 December 2020 (UTC)

They directly say what I am saying; the problem, though, is that you don't understand the material. You've said silly things like that it could be argued that states are pure just because they have vector reps (impossible, since they are non-extremal), and then said density matrices in finite dimensions don't have vector reps when in fact they do via their GNS rep, which Wikipedia itself says in its article on states on C*-algebras. You then confused this GNS vector rep with purification, which is a completely separate thing. I have pointed to the relevant chapters: Chapter 8 of Omnes and Chapter 11 of Peres, where they give a long discussion of why the environment cannot be reversed in general. If you haven't read these standard sources, maybe you shouldn't be editing Wikipedia pages on their topics, rather than asking me to give a curated reading of them.
Here's Peres on p.370:
"In that case, it is justified to say that the evolution which starts from the state ψc is irreversible" The state he is mentioning here is the total state of system + environment
So he does say the evolution is irreversible. I'm sure this is wrong somehow considering you are so "familiar" with Peres. — Preceding unsigned comment added by 64.43.31.141 (talk) 16:12, 13 December 2020 (UTC)
Here's Omnes on p.309:
1. It is impossible to circumvent decoherence when it takes place in a sufficiently big object.
2. This is not only a practical matter. The principles forbidding it are not those of quantum mechanics properly speaking but of relativity, not even mentioning the finiteness of the universe
So you can't just directly claim in the introduction that the total state is definitively reversible and pure, obeying the Schrödinger equation. Especially in QFT, where due to the absence of pure states and ever-present entanglement it can't be obeying the Schrödinger equation.
In addition you've said things like the references don't discuss decoherence, despite Omnes having a chapter literally called "Decoherence". It's Chapter 7. However I see that on another page you are literally disputing something fairly basic with somebody as knowledgeable as John Baez, so I've little hope I'll win this. 64.43.31.141 (talk) 16:05, 13 December 2020 (UTC)
I also want to add that this is quite odd, because my current wording is neutral. I'm just saying there is no consensus about the evolution being unitary and reversible. This is even the case with black hole information loss, where for example Hawking, Unruh and Wald (and even Peres) have all expressed the opinion at points that the evolution was not, or may not be, unitary and reversible. By just deleting all references to this, Tercer is simply enforcing his own opinion. Similar to what he is doing on the Superdeterminism article. 64.43.31.141 (talk) 16:29, 13 December 2020 (UTC)
No, that section of Peres is talking about how exactly reversing a unitary evolution is likely to be beyond the experimenter's capabilities. "...if we want to prepare a state that will become, after 10⁵ steps, the coherent state , we must control the value of with an accuracy better than the limit given by Eq. (11.57). If we are unable to achieve such an accuracy, we have every chance of landing far from our target—in a state which will be nearly orthogonal to ." Chapter 11 of Peres is his introduction to quantum chaos, which treats environmental effects as perturbations in the system's Hamiltonian; the term "decoherence" does not appear in his index, and system-environment entanglement gets two sentences after Eq. (11.68). Peres says a lot of interesting things, but they're orthogonal to the discussion here. XOR'easter (talk) 18:26, 13 December 2020 (UTC)
I disagree; it continues into Chapter 12, where he discusses the in-principle non-existence of superobservers based on this. The non-existence of superobservers is equivalent to irreversibility. He talks about it in similar terms to Omnes, in terms of the absence of certain observables, in Section 12-1.
"Therefore irreversibility is a necessary feature of quantum tests. There are no super-observers in our physical world"
"Reversing the unitary evolution (12.3), for example resurrecting Schrödinger's cat, is an impossible task"
Although I must say I don't understand the difference between an evolution that is beyond the ability of anyone to reverse and an irreversible evolution. He also references Schwinger and Englert, so would it be worth referencing Englert's papers on how the reversal Hamiltonian is unphysical in most cases?
Anyway, the point is that there is disagreement, even for the black hole information paradox. Singling out Peres in particular (and incorrectly, I think) is just dodging the point, which is that since there is disagreement the article cannot just state the evolution is reversible as a matter of fact.
64.43.31.141 (talk) 19:24, 13 December 2020 (UTC)
To understand what Peres means by "irreversibility", you have to start with how he rejects the idea of a wave function for the universe. He has both unitarity of time evolution and the absence of super-observers because he never tries to fit all of creation into a single state vector. ("In this book, I shall refrain from using concepts that I do not understand.")
Omnès takes the fundamental time evolutions to be unitary (p. 41). The "impossible to circumvent decoherence" bit is in a discussion about John Bell's argument that decoherence can answer questions in practice but not in principle. Omnès summarizes: "When one starts from a pure state, it will always remain a pure state as long as it evolves according to the Schrödinger equation. Accordingly, even if the reduced density operator describing the collective observables becomes diagonalized, this is unessential: the full density operator still represents a pure state and therefore it contains the possibility of showing interference. True enough, no interference can be seen in a measurement of a collective observable, but there always exist more subtle observables that would be able to show it." Omnès then argues that these "more subtle observables" would not be physically implementable. An interesting argument, to be sure, but not an assertion that the evolution of the "total state" must include non-unitarity somewhere. Omnès does discuss non-unitarity of the GRW sort (pp. 349–350), but he does not find it a very useful notion. Also, for Omnès, unitary evolution of a density operator by ρ ↦ UρU† is a form of the Schrödinger equation (p. 316).
The claim that Peres and Omnès were being cited to support was that "Reversible evolution is equivalent to the algebra of observables being all bounded operators on the Hilbert space." Where does either of them say this? XOR'easter (talk) 19:54, 13 December 2020 (UTC)
If you can't measure all observables, i.e. if some observables are unphysical, then you can't reverse the evolution (although Peres and Omnes weren't being used particularly to support this). This is an old theorem, but a recent short derivation is given here: https://arxiv.org/abs/2009.07450. However, it's just a basic theorem in C*-algebras; again, Peres and Omnes weren't particularly intended for this.
Reversible unitary evolution and the possibility of interference observables are equivalent. Thus if a given set of interference experiments cannot be performed, you cannot undo/reverse decoherence at the global level. I'll concentrate more on reversibility, as "unitary" can be a bit ambiguous when it comes to Type III algebras, since the evolution can be irreversible and entropy-increasing while you can still find a rep where it is described in the forward direction by a unitary operator. However, it will be one whose inverse is not a physical operation. This is essentially what Omnès derives, since if the inverse evolution does not exist then the time evolution operator is not unitary.
Omnes is more explicit in his highly cited "Consistent interpretations of quantum mechanics" on p.356:
The reverse process is mathematically meaningful: one can consider the formal time-reversed density operator that is obtained from ρ(t) as being an initial state and let the Schrödinger equation act on it. The outcome will be the initial state ρ(0) after a time t.
It is worth noticing, however, that it is usually impossible to prepare this time-reversed density operator, as a matter of principle.
Note: Principle. Usually unitarity in QM means that the evolution is purity preserving and reversible. Omnes uses an operator that is formally unitary (working in a non-rel QM Hilbert space there is always such a rep for the time evolution) but points out above that the inverse is non-physical. See Englert cited below for a stronger statement. 64.43.31.141 (talk) 21:17, 13 December 2020 (UTC)
I'll ignore your insults and address the content: as XOR'easter pointed out, Peres is saying that the evolution of quantum states is in practice irreversible in quantum chaos. Omnès is talking about the difficulty of performing an interference measurement on a system that is in a macroscopic superposition. And his whole calculation is based on the assumption that the system is in a pure state and the evolution is unitary (and thus reversible). So no, neither reference remotely supports your statement.
As for black hole evaporation, the consensus is that it is unitary; even Hawking himself agreed with that at the end. If you want to note that it might not be unitary, that's fine, this is a valid minority opinion. But you need to specify that this is what you're talking about, and not give the impression that there's any doubt about unitarity in non-relativistic quantum mechanics or QFT. Tercer (talk) 19:26, 13 December 2020 (UTC)
Omnès says it is impossible to perform an interference experiment, not just that it is difficult. As I mentioned above the impossibility of some interference observables is equivalent to irreversibility. 64.43.31.141 (talk) 21:17, 13 December 2020 (UTC)
No it's not. The theory is unitary and reversible. This is one thing. You cannot reverse it. This is a completely different thing. It doesn't matter whether it is merely difficult or impossible for you to reverse it, it doesn't change the fact that the theory is reversible. This is a mathematical statement about the fact that the state at a given time together with the Hamiltonian completely determines the state at other times.Tercer (talk) 21:29, 13 December 2020 (UTC)
Omnes's point is that it physically cannot be reversed, not just that "I" cannot reverse it. Of course it matters if it is impossible to reverse it: if it is fundamentally physically impossible to reverse it, then the evolution is irreversible. All you're saying is that there exists an operator that is the inverse of the time evolution. If this operator is unphysical (Englert, for example, mentions it involves unbounded energies) then it doesn't matter if it exists. What you are doing is like saying there are time machines, or that Gödel's universe exists, simply because it exists mathematically as a solution to the field equations. 64.43.31.141 (talk) 21:38, 13 December 2020 (UTC)
Yes, it does matter whether the inverse exists mathematically. Your proposed text says that "There is still discussion as to whether the combined system does in fact remain pure" — but the von Neumann entropy of the joint state doesn't actually go up just because the Hamiltonian of the time-reversed process is expensive to engineer. Likewise, considering the phrasing "reversible evolution even at the theoretical level", this presents a moving target as to what "the theoretical level" actually means. Surely a reasonable reader could infer that the theoretical level is the same as mathematical existence. XOR'easter (talk) 22:02, 13 December 2020 (UTC)
No, it doesn't. The phrasing might need to be changed, but it isn't that the Hamiltonian is "expensive to engineer"; it's that it is unphysical. That's what both Omnes and Englert (and Buchholz in QFT, when he derives that the state is non-factor Type III) are showing. If the reverse evolution is unphysical, then the entropy does go up, since, as I described above, if it is irreversible then there is an absence of interference observables, and thus it's not a factor rep, and so the Tomita-Takesaki method will show the entropy increase. I think you are arguing about this as if vector state = pure state. 64.43.31.141 (talk) 22:13, 13 December 2020 (UTC)
I'm not sure where you get that idea, but I don't think shifting the topic between non-relativistic QM (as in Omnès and Englert) to algebraic QFT is making your argument any more clear. And you seem to be relying on at least two different definitions of "unphysical". XOR'easter (talk) 22:25, 13 December 2020 (UTC)
Because if the reverse Hamiltonian is unphysical then the total state is not pure even though it is a vector state, and thus the evolution (even if represented by a unitary) will increase the entropy. My original argument, and the original thing I tried to include, was purely based on non-perturbative QFT, but I got the feeling nobody here actually understood the formalism and this would make arguing difficult. So I included stuff from non-relativistic QM, and even though I have Englert and Omnes saying that the evolution is in principle irreversible, somehow this still isn't enough. Also, nothing in my argument above was algebraic QFT; it just used the algebraic formalism on non-relativistic QM, which one can easily do (see Chapter 11 of Hannabuss). You said the evolution of the joint state still left the total state pure even if the reverse is difficult to engineer. I pointed out that Omnes is saying that the reverse evolution is impossible in principle, not just hard to engineer. I just used the algebraic formalism as the quickest way to demonstrate this implies increased entropy and non-purity. 64.43.31.141 (talk) 22:39, 13 December 2020 (UTC)

Perhaps the clearest way to present this is the following article by Englert: https://arxiv.org/abs/1308.5290. See p. 15:
"No, quantum evolution is not reversible"
Section 5.2 gives the argument. It takes place in non-relativistic QM and is a well-cited article. Thus it's not the case that there is no doubt about reversibility in non-relativistic QM. 64.43.31.141 (talk) 21:17, 13 December 2020 (UTC)

Yes, I've read that paper. I'm pretty sure I've cited it somewhere on Wikipedia before, actually. XOR'easter (talk) 22:02, 13 December 2020 (UTC)
So there is doubt as to whether the evolution is reversible. 64.43.31.141 (talk) 22:13, 13 December 2020 (UTC)

What am I actually being asked to demonstrate at this point? Buchholz and Borek both show cases in QFT where the state is non-factor mixed, so the evolution is fundamentally irreversible. The state being non-factor mixed is equivalent to being mixed over classical facts, which is equivalent to a growing double centralizer, which is equivalent to being irreversible, so these are all identical statements. In the non-relativistic case I have at least two people, Englert and Omnes, saying it is in principle irreversible. Surely this is enough to say we cannot just state as a fact that the total evolution is unitary and the total state is pure. I'm not sure at this point whether I'm just being asked to phrase things better or whether people are still saying it's a fact that the total evolution is reversible and entropy-preserving. 64.43.31.141 (talk) 22:49, 13 December 2020 (UTC)

Englert and Omnès evolve pure states to pure states. Eq. (35) in section 5.2 of Englert, the passage which you say contains the clearest way to present your argument, is a Schrödinger equation for the time evolution of a pure state. The rest of the section is about Hamiltonian evolution. I do not see how to start with a scenario where the time evolution is unitary by definition and then argue that it is not. Omnès says of his counterargument to Bell, "It cannot yet be considered as an indisputable proof", calling it "quite suggestive" rather than conclusive (p. 487). It's a statement about practicality which becomes a statement about principle if extra assumptions are invoked. And if we go with Englert's take on quantum theory, then all quantum states, whether pure or mixed, are catalogues of statistical information — epistemic rather than ontic. The "collapse" of a wave function is, to him, the bookkeeping device for updating the description in light of new information. If that is the interpretation one takes, then most of what's been debated on this page — and indeed much of the article as it stands — is tangential. XOR'easter (talk) 01:44, 14 December 2020 (UTC)