Talk:Princeton Engineering Anomalies Research Lab/Archive 4
This is an archive of past discussions about Princeton Engineering Anomalies Research Lab. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
The "Daily Free Press" student article does not establish PEAR as a "pseudoscience" laboratory
The article in dispute:
So this Daily Free Press article was written by a bunch of college students at Boston University who by all accounts just didn't know what the fuck they were talking about.
According to the article:
- But critics said experiments like [these] do not prove much.
That's an interesting assertion. Why should we think this is true?
- “This is just one of many examples where I think wishful thinking causes too much cognitive distortion in research,” said science author and historian Michael Shermer. “I find most of their research to be below the standards of rigorous science,” he said. “[Those who practice ‘pseudoscience’] make claims to scientific credibility that, in fact, lack evidence or fail to employ the methods of science,” Shermer said.
Great. Now why the fuck should any of us scientists give even half a shit about what Michael Shermer BELIEVES IN HIS HEART about PEAR? Is there any evidence that Shermer even has half a clue about anything PEAR did? I want to see a well-fucking-reasoned argument about why PEAR was flawed--citing a single college professor is not that at all. Shermer is a smart guy. He's an outspoken atheist. He's clearly spent a lot of time debunking pseudoscientists. But I'm an outspoken evil-fucking-atheist too. None of that is evidence, and we can't just take the man's word for it. WHERE IS THE GODDAMN EVIDENCE? WHERE IS THE RESEARCH?
The article unfortunately continues:
- “That does not stop people from publishing about it for an uncritical or uneducated audience,” Allchin said in an email.
What the fuck? Jahn and PEAR published plenty of their studies in the IEEE--the Institute of Electrical and Electronics Engineers, "the world's largest association for the advancement of technology." This is an "uncritical or uneducated audience"!? Where is the actual EVIDENCE disputing any of these pro-ESP studies?!
They go on:
- Allchin said, however, some types of research fall into a gray area between credible and non-credible, which he defined as marginal science. Although the mainstream science community at first dismisses these areas of study because they seem far-fetched, he said they often eventually present scientifically provable results.
- These types of research, he said, do not strictly follow the scientific method, but are still worthy of note.
- “[Marginal science] is a genuine effort to clarify the fringes of our experience,” he said.
PEAR absolutely DID follow the scientific method--and, as the article itself admits, there is every reason to believe PEAR and other research programs like it can and will continue to "present scientifically provable results."
That a couple of student journalists mistakenly refer to PEAR as pseudoscience is not evidence of anything at all, other than that college students still have much to learn about life.
Neuroscience325 (talk) 12:19, 30 May 2015 (UTC)
- attempting to place the white labcoat of science over cockamamie woo is what pseudoscience is and is what PEAR was doing. -- TRPoD aka The Red Pen of Doom 17:39, 3 June 2015 (UTC)
HAHAHAHAHAHAHAHA. Are you serious, bro? According to whom?
Nice name-calling by the way. You really are academic with your informed criticisms. HAHAHAHAHAHAHA.
I can't tell whether you're some sort of intelligence (more like unintelligent) agent, intentionally using this encyclopedia to spread disinformation as a sort of psychotronic warfare, or whether you're just plain stupid. Probably a bit of both. Or a lot of both. Why do you care what the PEAR article says if it's all been so clearly "thoroughly discredited"? Do you know who Hal Puthoff is? Be careful: neither answer bodes well for your future.
Regardless, I've given plenty of great sources and criticisms for whatever future, budding entrepreneurs and editors would like to come along to investigate.
See you in the 22nd century, retard. Have fun being poor. That is, having no money and being spiritually inept. I'm done now. Peace the fuck out. ✌ PEAR says, I control your brain. I say, they're right.
Neuroscience325 (talk) 14:22, 25 June 2015 (UTC)
Non-neutral article
Most references in this article stem from one source: the "Skeptic's Dictionary", i.e. Robert Todd Carroll. It is therefore not a neutrally sourced article. 86.93.208.34 (talk) 07:16, 8 November 2018 (UTC)
Problems with "The Skeptic's Dictionary" as a source
So here's the list of a good number of PEAR's publications. I count 62...
The source in dispute: "Princeton Engineering Anomalies Research (PEAR)"
Now when I first ran across this source, I thought it had a fairly convincing argument too. This was several months ago. But after any investigation beyond a mere cursory glance, you realize it basically ignores all the data collected and the history of the field, and says things that either make no sense or completely distort the facts of the situation.
For example:
- Physicist Bob Park reports, for example, that he suggested to Jahn two types of experiments that would have bypassed the main criticisms aimed at PEAR. Why not do a double-blind experiment? asked Park. Have a second RNG determine the task of the operator and do not let this determination be known to the one recording the results. This could have eliminated the charge of experimenter bias. Another experiment, however, could have eliminated most criticism. Park suggested that PEAR have operators try to use their minds to move a "state-of-the-art microbalance" (Park 2008, 138-139). A microbalance can make precise measurements on the order of a millionth of a gram. One doesn't need to be clairvoyant to figure out why this suggestion was never heeded.
PEAR closed in 2007. And to dispute its results--which were produced on a variety of REG devices, all of which had proper controls--they reprimand Dr. Jahn for not performing a procedure first outlined in 2008: "Park suggested that PEAR have operators try to use their minds to move a state-of-the-art microbalance (Park 2008, 138-139)." Note also that they cite absolutely zero evidence for the supposed charge of "experimenter bias," which is completely unfounded once you read up on how the experiments were actually done: all the data was automatically added to a massive database, so no researcher or operator had the opportunity to manipulate it.
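For what it's worth, here is a rough sketch of the kind of double-blind arrangement Park is described as proposing: a second RNG assigns the intended direction for each session, and the assignment is withheld from whoever scores the data until the end. Everything in it--the function names, the session and trial counts--is invented for illustration; it is not PEAR's protocol or Park's actual specification.

```python
# Hypothetical sketch of a double-blind REG design: a second RNG assigns the
# intended direction per session, and the assignment is kept away from the
# scorer until all data are recorded. Names and numbers are illustrative only.
import random

def run_session(n_trials=1000):
    """One simulated REG session: count of 1-bits out of n_trials."""
    return sum(random.getrandbits(1) for _ in range(n_trials))

def double_blind_study(n_sessions=100, n_trials=1000):
    sealed_assignments = []  # kept away from the scorer until the end
    raw_counts = []          # all the scorer ever sees during the run
    for _ in range(n_sessions):
        intention = random.choice(["high", "low"])  # second RNG sets the task
        sealed_assignments.append(intention)
        raw_counts.append(run_session(n_trials))    # operator attempts the task
    # Counts and intentions are matched up only after recording is finished.
    high = [c for c, a in zip(raw_counts, sealed_assignments) if a == "high"]
    low = [c for c, a in zip(raw_counts, sealed_assignments) if a == "low"]
    return (sum(high) / (len(high) * n_trials),
            sum(low) / (len(low) * n_trials))

print(double_blind_study())  # with no effect present, both rates hover near 0.5
```

Whether PEAR's automated data logging already accomplished the same blinding is exactly the point in dispute here.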
WOW. It's like the person writing the article was actually retarded. But this Park anecdote is supposedly cited as evidence that:
- Perhaps the most disconcerting thing about PEAR is the fact that suggestions by critics that should have been considered were routinely ignored.
PEAR literally had to beg academic journals to let them publish their studies because parapsychology is such a taboo. Note the two NY Times articles cited as well as the Nature one:
- the lab’s results have been studiously ignored by the wider community. Apart from a couple of early reviews (R. G. Jahn Proc. IEEE 70, 136–170; 1982 and R. G. Jahn and B. J. Dunne Found. Phys. 16, 721–772; 1986), Jahn’s papers were rejected from mainstream journals. Jahn believes he was unfairly judged because of the questions he asked, not because of methodological flaws.
- http://www.boundarylab.org/bi/articles/Nature_PEAR_closing.pdf
That the article does not elaborate on this fact--key to understanding what PEAR actually did--essentially constitutes lying: manipulating the facts to give the reader an impression not supported by the actual historical data.
Now, the article goes on to accuse these respected scientific researchers of academic fraud without evidence and contrary to 50 years of historical data--it's completely inexcusable and retarded:
- Furthermore, Stanley Jeffers, a physicist at York University, Ontario, has repeated the Jahn experiments but with chance results (Alcock 2003: 135-152). (See "Physics and Claims for Anomalous Effects Related to Consciousness" in Alcock et al. 2003. Abstract.) And Jahn et al. failed to replicate the PEAR results in experiments done in Germany (See "Mind/Machine Interaction Consortium: PortREG Replication Experiments," Journal of Scientific Exploration, Vol. 14, No. 4, pp. 499–555, 2000).
TWO STUDIES. They cite TWO STUDIES to "debunk" 60+ studies published online on PEAR's website--which include the very one they cite by Jahn et al.!!!--and this says nothing at all of the parapsychology research at SRI that PEAR was only trying to replicate.
We should expect to get studies with both positive and negative results, regardless of what's actually going on!!! That's simply how the research works--if there was something wrong with PEAR's data, they need to explain what that something was and give an alternative hypothesis to the ESP hypothesis advocated by PEAR. But they don't even pretend to do this.
They try to justify themselves:
- According to Ray Hyman, "the percentage of hits in the intended direction was only 50.02%" in the PEAR studies (Hyman 1989: 152). And one "operator" (the term used to describe the subjects in these studies) was responsible for 23% of the total data base. Her hit rate was 50.05%. Take out this operator and the hit rate becomes 50.01%. According to John McCrone, "Operator 10," believed to be a PEAR staff member, "has been involved in 15% of the 14 million trials, yet contributed to a full half of the total excess hits" (McCrone 1994). According to Dean Radin, the criticism that there "was any one person responsible for the overall results of the experiment...was tested and found to be groundless" (Radin 1997: 221). His source for this claim is a 1991 article by Jahn et al. in the Journal of Scientific Exploration, "Count population profiles in engineering anomalies experiments" (5:205-32). However, Jahn gives the data for his experiments in Margins of Reality: The Role of Consciousness in the Physical World (Harcourt Brace, 1988, p. 352-353). McCrone has done the calculations and found that 'If [operator 10's] figures are taken out of the data pool, scoring in the "low intention" condition falls to chance while "high intention" scoring drops close to the .05 boundary considered weakly significant in scientific results."
This doesn't make any sense. First of all, PEAR conducted REG experiments on a variety of different devices and media, so these disparate experiments ought to be considered individually for an effect, and then looked at together to see the net effect. The way the article glosses over this point makes it sound like PEAR only conducted a single type of REG experiment, which is not at all true. They're basically misrepresenting the nature of the actual research done.
And Margins of Reality was only ever intended to be a book for a popular audience--it was never meant as some end-all-be-all of PEAR's research. That they're citing a guy from the 1980s to "debunk" research that was ongoing into the 2000s should be very telling. (This claim is specifically cited in the Wikipedia article as some of the only evidence that PEAR exhibited any bias or methodological flaws.)
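And before going on, it's worth putting numbers like the quoted 50.02% in perspective, because whether a deviation that small means anything depends entirely on how many binary events it is computed over--a detail the dictionary entry never pins down. A back-of-the-envelope sketch, using the 14 million figure from the McCrone quote and treating it two ways (illustrative only, not PEAR's actual analysis):

```python
# Back-of-the-envelope only: converts a hit rate over n fair-coin events into a
# z-score and one-sided p-value. The 14 million figure comes from the McCrone
# quote above; whether an "event" is a single bit or a 200-bit trial is exactly
# the kind of detail the dictionary entry glosses over.
from math import sqrt, erfc

def z_score(n_events, hit_rate):
    """z-statistic for hit_rate observed over n_events with chance p = 0.5."""
    return (hit_rate - 0.5) * n_events / sqrt(n_events * 0.25)

def one_sided_p(z):
    return 0.5 * erfc(z / sqrt(2))

z = z_score(14_000_000, 0.5002)
print(round(z, 2), one_sided_p(z))  # roughly 1.5 and p ~ 0.07 under this reading
# If each "trial" were instead 200 binary samples, the event count is 200 times
# larger and the same 50.02% rate gives a z of roughly 21.
```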
But the nonsense goes on:
- If [operator 10's] figures are taken out of the data pool, scoring in the "low intention" condition falls to chance while "high intention" scoring drops close to the .05 boundary considered weakly significant in scientific results.
It's well established in psychic research that some individuals are simply extraordinary performers--they have more well-developed PK abilities than the population mean, in other words. Suggesting we arbitrarily remove the highest performer from the data just for shits and giggles because it would make PK less likely isn't a scientific argument. But even so: "high intention scoring drops close to the .05 boundary considered weakly significant in scientific results." Even doing what they imagine, we still find an above-chance correlation for "high intention"...but they don't even attempt to explain this, even after arbitrarily playing with the data to curve-fit it into their own model--based on a PERSONAL cognitive bias and preconceived notion of what "ought to" happen. This isn't fucking science. And again, this ONLY speaks of PEAR's research published in Margins of Reality in 1987!!! This is not an overarching criticism of PEAR--it is a comment dealing only with the first 8 or so years of their research: AND THEY RAN FOR 28 YEARS!
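To be concrete about what that "take out operator 10" exercise does statistically, here is a toy example. The operator counts and hit totals below are fabricated for illustration--they are not the PEAR, Hyman, or McCrone figures:

```python
# Toy illustration of the "drop the top operator" exercise: one heavy
# contributor plus twenty near-chance contributors, all numbers fabricated.
from math import sqrt

def pooled_z(per_operator):
    """Pooled z over (n_trials, n_hits) pairs, chance p = 0.5."""
    n = sum(t for t, _ in per_operator)
    hits = sum(h for _, h in per_operator)
    return (hits - 0.5 * n) / sqrt(n * 0.25)

operators = [(2_000_000, 1_002_000)] + [(500_000, 250_100)] * 20

print("all operators:   ", round(pooled_z(operators), 2))     # ~2.31
print("top one removed: ", round(pooled_z(operators[1:]), 2))  # ~1.26
# The pooled statistic drops below conventional significance, which is the
# critics' point; whether excluding a real contributor after the fact is a
# legitimate move is the question being argued here.
```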
PEAR got started after an undergraduate came to Jahn in the 1970s and the two of them first decided to study PK...which they found evidence for. This research is simply never mentioned. The lab manager of PEAR, Brenda Dunne, was a graduate student at the University of Chicago who had conducted her own independent research to replicate the remote viewing experiments at SRI in the early 1970s. This is also never mentioned. In 1998, PEAR spawned the Global Consciousness Project, which continues today and which also claims to find above-chance results. This research is also ignored.
- In 1987, Dean Radin and Nelson did a meta-analysis of all RNG experiments done between 1959 and 1987 and found that they produced odds against chance beyond a trillion to one (Radin 1997: 140). This sounds impressive, but as Radin says “in terms of a 50% hit rate, the overall experimental effect, calculated per study, was about 51 percent, where 50 percent would be expected by chance” [emphasis added] (141). A couple of sentences later, Radin gives a more precise rendering of "about 51 percent" by noting that the overall effect was "just under 51 percent." Similar results were found with experiments where people tried to use their minds to affect the outcome of rolls of the dice, according to Radin. And, when Nelson did his own analysis of all the PEAR data (1,262 experiments involving 108 people), he found similar results to the earlier RNG studies but "with odds against chance of four thousand to one" (Radin 1997: 143). Nelson also claimed that there were no "star" performers.
"This sounds impressive"?! I think it is impressive. Oh, it's only a bit less than 51%...so what? If it's a scientifically and empirically verified result, it merits SOME fucking explanation. If that explanation is simply experimenter bias, EXPLAIN HOW THAT HAPPENED. They offer no causal account of how the data collection process became tainted--it's just a bunch of idle speculation that actually runs counter to what we know the facts of the situation are.
That the entire article is based on emotional appeals rather than well-researched scientific studies is what really did it for me. PEAR attacks its critics by throwing DATA at them: THIS IS WHAT WE DID. This article is an embarrassment and ought either to be removed as a source, or the relevant portions of the Wikipedia article ought to be significantly reworked, since it grossly misrepresents Princeton Engineering Anomalies Research. Citing TWO studies as "evidence" against 60+ studies--when what is actually needed is a statistical meta-analysis, and the meta-analyses come out in PEAR's favor--is not science. It's theology.
Neuroscience325 (talk) 11:53, 30 May 2015 (UTC)
- tl;dr. But The Skeptic's Dictionary is a well-known and respected source for identifying woo of all types. You will need to establish that actual reliable sources, and not just you, have found issue with their work. -- TRPoD aka The Red Pen of Doom 17:41, 3 June 2015 (UTC)
In other words, you want clearly incorrect information to stay inside the Wikipedia article.
You're like a 5-year-old kid who got beat up for his lunch money! You're adorable!
Neuroscience325 (talk) 14:27, 25 June 2015 (UTC)
Introduction not true
The introduction is not true. The PEAR lab was set up to test the well-recognized phenomenon of "The Observer Effect" in physics. — Preceding unsigned comment added by 24.63.50.134 (talk • contribs)
- Source? --Hob Gadling (talk) 13:53, 7 June 2016 (UTC)
2006 article in the lead
Re this revert, please respond to my edit comment. To expand on it,
- Per the neutral point of view policy (WP:NPOV), we avoid presenting a false balance between mainstream and fringe views (WP:FALSEBALANCE), particularly with regard to pseudoscience (WP:PSCI).
- Independent coverage is needed (WP:FRIND). The paper in question was co-written by PEAR associates.
- Even if independent coverage were found, the new material would need to be added to the body before adding it to the lead.
Manul ~ talk 19:51, 17 January 2017 (UTC)
- The reversion here is entirely justified, for various reasons beyond those noted above. For example, the cited JSE paper doesn't challenge the pseudoscience characterization in the previous sentence at all. It simply responds to another JSE paper (Schub 2006), which had criticized an earlier Radin/Nelson meta-analysis. Also, since we're not addressing anything about the Schub work, why would we even include this citation? (In fact, since the Schub work seems to offer some additional insights into the limitations of the PEAR methodology, it might be worth citing in the text somewhere. But that's a completely different matter.) Meanwhile, the lead section should certainly be left as-is. jxm (talk) 01:37, 18 January 2017 (UTC)
Philosophy 101
"attempting to place the white labcoat of science over cockamamie woo is what pseudoscience is and is what PEAR was doing."
"that PEAR tried to cloak its mumbo jumbo under the labcoat of science is covered in the general dismissal as pseudoscience. that some people want to add a layer of quantum mysticism over the labcoat is hardly groundbreaking nor particularly informative."
Even for purely historical purposes, let us suppose the following definitions to establish what science has been.
Natural philosophy: Philosophy outside theology
Natural science: A species of natural philosophy
Philosophers tend to assert things without evidence--philosophy is more akin to storytelling than so-called materialistic science. Still, all science retains this storytelling capacity, if only at a primitive level of analysis, because data without a source is meaningless.
PEAR asked the question, "What data does human consciousness produce?"
I don't have an answer to that question because I am unsure whether it is scientifically meaningful.
But "Do thermodynamic anomalies demonstrate free will?" is scientifically meaningful (by my own standard).
Admitting that the above standard is subjective (that I cannot demonstrate my own free will scientifically), I pose a third research question, that PEAR did ask: "What is a statistical baseline?"
Does anybody see the humor in this mode of thinking? Or is it only me?
Just remember, some people like their technology. Other people love their technology.
I apologize for the riddles, but to recall the words of Dr. E. Schrödinger, "Science is a game--but a game with reality."
YourThoughts (talk) 10:44, 14 March 2017 (UTC)
- Could you please frame your contribution as an intelligible suggestion for improving the article? If you cannot, you are wrong here and should go to a forum instead. --Hob Gadling (talk) 11:39, 14 March 2017 (UTC)
- Yes. So, the article suffers from a few weaknesses in its present state--mainly, a lack of internal cohesion.
- My main qualm is this: PEAR's research is openly derided based on opinion pieces, while neutral scientific journalism (i.e., the NY Times) is belittled.
- To make the PEAR article more intelligible, it would perhaps be worth remembering how Wikipedia defines science, "a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe." (https://en.wikipedia.org/wiki/Science)
- PEAR was a research program that dealt in the technology of statistics. That is it. They used physical devices to process their data. Perhaps a statistical methodology may be flawed in its design, but to openly call PEAR pseudoscience risks spoiling the entire science game.
- Why is the Large Hadron Collider not labeled a pseudoscience experiment? (Sources do not agree with such a characterization; is it inappropriate to question--or critically analyze--the sources referenced?)
- Sorry, not sure I understand what it is you want. The first two citations in the article are to the University of Chicago Press and the New York Times, and those (as well as the remainder of our sources) accurately describe PEAR as research into parapsychology rather than the technology of statistics. - LuckyLouie (talk) 13:13, 14 March 2017 (UTC)
- Why do you think that articles listing serious weaknesses of the PEAR studies, as this source does, are "opinion pieces"? --Hob Gadling (talk) 16:01, 14 March 2017 (UTC)