User talk:JzG/Archive 137
This is an archive of past discussions about User:JzG. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Academic journal AfDs
Hey, what I don't get is why you nominate articles on journals that meet NJournals, when by now it should be clear that you're wasting your time as they will at best get a "no consensus", but you ignore AfDs like this one, where the only source is a few lines in a local encyclopedia, so that this misses not only NJournals, but also GNG. I didn't point this out earlier, to avoid being accused of canvassing. Instead of continuing to nominate journals that meet NJournals, your time is probably better spent trying to modify NJournals directly. Although, if you look through the talk page history, you'll see that it failed as a guideline because a group of people (you'd probably belong to that group) thought that it was too lenient, whereas another large group of people thought that any journal that is peer-reviewed or can be used as an RS should be included. By now, I doubt that anybody could get consensus for any change in that essay (I for one would do away with the "historical purpose" criterion, but I'm afraid that's not gonna fly either...). Cheers! --Randykitty (talk) 11:41, 3 December 2016 (UTC)
- NJournals is a guide to the kinds of journals likely to be notable, not a policy that mandates that journals which meet it are notable. People are misinterpreting necessary conditions as being sufficient conditions.
- Notability is conferred by coverage in reliable independent sources, and that in turn goes back to WP:V and WP:NPOV. How can we verify that the self-description of a journal is neutral? Being indexed does not confer notability and does not allow us to have a verifiably neutral article, because the journal descriptions are provided by the publisher. Being indexed tells us nothing more than that the journal exists. It would be a valid inclusion criterion for a directory, but Wikipedia explicitly is not a directory.
- Likewise, simply having an impact factor does not confer notability. Do you honestly think that a journal with an impact factor of 0.3 is a notable journal? Think what that IF means: the vast majority of articles published in this journal are never cited at all. We'd reject it as a source here; the journal contributes little if anything to the scholarly enterprise. These are basically mills for publishing papers to help academics retain tenure in a system where the mechanism for assessing academics is utterly broken. (A rough sketch of that impact-factor arithmetic follows below this comment.)
- Most of these journal articles - including the ones WP:POINTily created on journals published by the deleted IGI Global, effectively a vanity press - have absolutely no independent coverage whatsoever. There are no sources for any fact, including even the existence of the journal, which do not track back to the publisher.
- It's like listed buildings. Over 90% of listed buildings are Grade II, and a substantial proportion (I think most, actually) of these are single family dwellings, of moderate architectural merit, usually in the context of a conservation area. In order to be notable, a historic building would almost certainly need to be listed, but being listed does not in any way make a house notable.
- Go ahead and prove me wrong. Find me articles providing analytical or review coverage of any of those journals. I looked, I found fuck all, but my Google-fu is weak. Nothing would please me more than for these not to be directory entries, but nothing I have seen to date persuades me that they are anything else. Guy (Help!) 14:03, 3 December 2016 (UTC)
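A minimal sketch of the impact-factor arithmetic referred to above. The citation counts are hypothetical, invented purely for illustration; the formula used is the standard two-year impact factor (citations received in the census year by articles from the two preceding years, divided by the number of citable articles published in those two years).

```python
# Minimal illustrative sketch, not data about any real journal: a hypothetical
# citation distribution that yields a two-year impact factor of about 0.3.

def impact_factor(citations_per_article):
    """Two-year impact factor: citations received in the census year by
    articles published in the two preceding years, divided by the number
    of citable articles published in those two years."""
    return sum(citations_per_article) / len(citations_per_article)

# Suppose the journal published 100 citable articles over the two preceding
# years: 80 never cited, 15 cited once, 5 cited three times (all invented).
citations = [0] * 80 + [1] * 15 + [3] * 5

print(f"Impact factor: {impact_factor(citations):.2f}")    # 0.30
uncited = sum(1 for c in citations if c == 0)
print(f"Uncited articles: {uncited} of {len(citations)}")  # 80 of 100
```

Under a skewed distribution like this one, an IF around 0.3 is indeed compatible with most articles never being cited, though other distributions (every article picking up roughly one citation every three years, say) would produce the same number.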
- "Being indexed tells us nothing more than that the journal exists." That's incorrect. Having a website tells us that a journal exists. Being included in a selective index tells us that a commission of experts judges the journal to be among the most significant in its area. Scopus, the most inclusive of the selective indexes includes about 20,000 journals out of an estimated 100,000. In other words, for every journal included in Scopus, there are 4 that are not included. Sounds pretty impressive to me. And an IF of 0.3 cannot be judged in isolation. For a journal in the life sciences or medicine that is indeed rather low (although most national medical journals have low IFs but have a rather large impact on medical practice in their respective countries). For a mathematics journal 0.3 is actually quite respectable, because there citations usually accrue over a much larger time span than the 2 years used for the IF. Anyway, this is not the place to discuss these issues, that is the talk page of NJournals (or of the academic journals Wikiproject). And I'm still curious why you concentrate your efforts on journals that most editors here think are notable and not on obscure stuff like this one... --Randykitty (talk) 16:42, 3 December 2016 (UTC)
- "Being indexed tells us nothing more than that the journal exists." That's incorrect. Having a website tells us that a journal exists. Being included in a selective index tells us that a commission of experts judges the journal to be among the most significant in its area. Well...no, it doesn't really tell us that, either. Being indexed, even in a "selective" index like Scopus, often means little more than "the journal exists and the publisher bothered to apply for inclusion in the index". Meeting the nominal inclusion criteria is a very low bar, practically speaking. Indexing 20,000 out of 100,000 isn't a meaningful statistic except inasmuch as it illustrates a) how much utter crap there is out there, b) how willing Scopus is to inflate their count of total journals to give the illusion of selectivity, and c) how only twenty thousand or so publishers were willing to apply for inclusion. TenOfAllTrades(talk) 17:25, 3 December 2016 (UTC)
- I'm sorry, but that, too, is incorrect. Sure, many publishers/journals will not apply for indexing in Scopus (or one of the other citation indexes), but only because they know that they'll be rejected. I know for a fact that Scopus (even Scopus...) rejects quite a lot of applications. The vetting by their "Content Selection & Advisory Board" certainly is not a pro forma exercise. See their content coverage guide (which actually talks about "between 80,000 and 300,000 scientific serial publications in existence worldwide"...), their inclusion and post-inclusion re-evaluation criteria, and the introduction to the functioning of their reviewing board. --Randykitty (talk) 17:52, 3 December 2016 (UTC)
- I'm not ignorant of the posted criteria, but I'm also not prepared to ignore how much wiggle room those criteria offer. (Scopus indicates that titles will be "evaluated on" those criteria, but offers no concrete expression of how a pass/fail decision is actually made.) I do wonder, though, the basis for your assertion that "quite a lot" of applications are rejected. If a thousand applications have been rejected – something I suspect would represent a high guess – that's still a 95% pass rate. Do you have some solid, quantitative data on acceptance and rejection numbers, and number of journals dropped on re-review? TenOfAllTrades(talk) 18:42, 3 December 2016 (UTC)
- No, sorry, no hard data. But I suspect that the rejection rate is far more than 5%. I know of several journals that applied and got turned down. But that is my personal knowledge, I have (unfortunately) no sources to back that up. Of course, the Thomson Reuters databases (and MEDLINE, for example) are far more selective (yes, I know they were sold, but I keep forgetting the name of the new owner... :-). --Randykitty (talk) 19:01, 3 December 2016 (UTC)
This is missing the point. Scopus is not a reliable independent source because what it says about a journal is supplied by the publisher. I also disagree that it has any degree of discernment. It's sort of like Who's Who: some of the entries are unequivocally notable, but some are pure vanity. Guy (Help!) 23:36, 3 December 2016 (UTC)
- Sorry, but I disagree. Yes, the publisher will provide basic (and absolutely uncontroversial) info like ISSN, frequency, and such. And the publisher will provide Scopus with the articles that it publishes, the abstracts, the citation info, etc. But that is not what makes Scopus important for notability. What counts is that Scopus has a committee of specialists who evaluate each journal for possible inclusion and who only select the most notable ones. Just as WP:PROF assumes notability for a holder of a named chair because we're not going to second-guess the recruitment by a major university for a major position, I don't see why we should second-guess a committee of experts in the case of Scopus. --Randykitty (talk) 08:34, 4 December 2016 (UTC)
- Looks like the acceptance rate of submissions reviewed varies between thirty and sixty percent, with over three thousand submissions per year. --tronvillain (talk) 17:37, 7 December 2016 (UTC)
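A quick back-of-the-envelope check of what those figures imply for the number of titles added each year. The totals below simply reuse the approximate numbers quoted just above, treated as round assumptions rather than independent data.

```python
# Back-of-the-envelope arithmetic using the approximate figures quoted above:
# an acceptance rate of roughly 30-60% of reviewed submissions, and a bit
# over 3,000 submissions per year (both treated here as round assumptions).

submissions_per_year = 3000
low_rate, high_rate = 0.30, 0.60

low_added = round(submissions_per_year * low_rate)
high_added = round(submissions_per_year * high_rate)
print(f"Implied titles accepted per year: about {low_added} to {high_added}")
# -> about 900 to 1800, broadly consistent with the "thousand or more
#    journals added a year" figure mentioned further down this thread.
```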
- No "committee of experts" validates the journal descriptions. I'd be faintly astonished if even one expert reviewed most of the submissions. The experts may well set criteria, but the sheer number of inclusions indicates that a rule-based approach is most likely. And that's beside the point anyway because, and I am sorry to have to repeat it, the information about the journal is supplied by the publishers. There's no way SCOPUS sends out teams to write journal descriptions. So, inclusion in SCOPUS is perfect grounds for a directory entry, but not for an encyclopaedia article. Guy (Help!) 18:06, 7 December 2016 (UTC)
- Nobody needs to "send out" anybody. What you do when you evaluate a journal (for Scopus or anywhere else) is that a/ you access the journal homepage and examine it, b/ you examine the content of the journal (not everything, but a random sample), c/ you look at citation data (not something the publisher can provide you with, but you can retrieve this from Scopus), d/ etcetera. Scopus has a large committee of subject experts who in their turn use evaluation reports generated by subject experts that they select, much as an editor selects reviewers for manuscripts. And just as journals that publish thousands of articles yearly manage to find reviewers for all of them, just so Scopus finds evaluators for every journal that applies. And just like editors do with manuscripts, some journals will be rejected out of hand because they obviously do not adhere to some of the criteria, so no reviewers need to be found for those. That's how things like this work, not by applying some automated algorithm. --Randykitty (talk) 18:16, 7 December 2016 (UTC)
- And they conduct ongoing evaluation of tens of thousands of journals to ensure standards are maintained? Unlikely. But you said it: they view the journal's own self-description. They don't read the journal and write an evaluation. "It's a journal" is as far as it goes. An impact factor of 0.3, to take one example, indicates absurdly broad inclusion criteria and no wider interest in the contents of the journal itself. It exists solely so that academics can build resumes to meet arbitrary criteria. I would say that a bare minimum would be a "most significant paper" type evaluation which showed at least one paper that made an actual difference. Publishing hundreds of me-too papers does not constitute meaningful participation in scholarship. Guy (Help!) 19:15, 7 December 2016 (UTC)
- Sorry, but are you trying to misunderstand me? Read again what I said above. Looking at a journal's self-description is just one thing one does when evaluating a journal (and not the most important thing either). And an IF of 0.3 is perhaps low for, say, neurology, but it is quite high for mathematics or the humanities. And your remarks about what constitutes scholarship and about building resumes are quite bleak and not really mainstream, to say the least. --Randykitty (talk) 19:31, 7 December 2016 (UTC)
- The "large committee of subject experts" appears to be sixteen people. --tronvillain (talk) 19:34, 7 December 2016 (UTC)
- Exactly. And like journal editors, these subject specialists will solicit reviews from other specialists. --Randykitty (talk) 20:03, 7 December 2016 (UTC)
- Will they though? Where, exactly, is that specified? How do you know they're doing anything more than a cursory examination? And even if they were, how would that make the thousand or more journals added a year notable? --tronvillain (talk) 22:21, 7 December 2016 (UTC)
- There are two publications describing scientific magazines; the best known is Katz, William A., and Linda Sternberg Katz, Magazines for Libraries: For the General Reader and School, Junior College, College, University, and Public Libraries, irregularly published but approximately annual. In addition, there's a series of long comparative essays in the journal Scientific and Technical Libraries. It would certainly be possible to add all the citations from there, but it's very boring work. There is no publication reviewing new academic journals systematically--CHOICE used to do so, but stopped at least 20 years ago. In the period before there was Journal Citation Reports, there was greater coverage, especially in the various subject guides to the literature. But periodicals are rarely reviewed now, because the important information about them is their entry in Journal Citation Reports, which every science library uses as a standard.
- Additional sources can be found for many journals. They tend however to be published in the journal they describe: see for example [1]. For journals involved in a major scandal, there tends to be more, especially when it's a major journal, for example the New England Journal of Medicine. There are also the sources centering around the predatory journals in Beall's List--this gives a bit of a problem, because almost all of them are otherwise non-notable. Finding all of these properly would be a major research project.
- There are some areas which WP does not cover well, because the usual GNG guideline does not really apply, and it is difficult to find another generally accepted standard. Scientific societies, business-to-business companies in unexciting fields, and so on. There are a few such fields which we cover generally under the banner of "presumed to be notable," such as newspapers and broadcast radio stations. There's a special case of compromise by which we cover secondary schools but not elementary schools. There are a few places where we essentially ignore the guideline: populated places for example, and early Olympic athletes. And there's one formal alternative guide, WP:PROF. This is another such place. The field has its own solid standard of unquestionable notability, which is inclusion in Journal Citation Reports (in the sciences and social sciences). This is so very selective that we use a broader standard, also widely recognized by scientists and academics: inclusion in a selective index. There are 3 recurrent questions where there is some doubt: 1/ whether Scopus is sufficiently selective, 2/ whether new journals not yet in the indexes can in some cases be notable because of the notability of the other journals from a particular rigorous publisher, like the American Chemical Society or American Physical Society, and 3/ whether journals that are the leading journals in very specialized fields too small for there to be any selective index can be notable also.
- I will agree that I am a little skeptical about Scopus's drive to extend its list as far as possible. I can see the point of having a very broad index, but some of their decisions seem marginal to me. However, the thousand or so they add yearly are primarily attempts to cover additional languages and additional specialized fields, not to add unimportant journals in the major fields, and they are not the problem. The best way of checking their selectivity is to look for the journals that they do not cover--using an index like DOAJ for example. DGG ( talk ) 06:44, 9 December 2016 (UTC)
- @DGG: Sure, but the problem for me is that a lot of these journals are known to be substandard. The problem of the drive to publish in academia is well documented, the result is that most published research is never read let alone referenced, and a lot of these journals exist solely to feed this market. You know about predatory open access publishing, but there are journals which are nearly as bad which don't make Beall because they aren't open access. IGI Global is a problem publisher. Its main business appears to be eye-wateringly expensive books which sell a handful of copies to reference libraries (I encountered this while cleaning up the vanity spamming of Jonathan Bishop, a man with no actual academic standing who has authored a book published by IGI and papers published in their journals). Obviously any article on journals like this needs to focus on their actual significance and publishing practices - high author fees, low standards of acceptance, extremely low citation rates and so on - and we would do that by reference to reliable independent sources, but there are none. Nobody is discussing these journals and their objective merit. The only sources are effectively directories, and being in the directory is accepted by journal fans as proof positive of notability. So we fail WP:NPOV, which is the most important policy we have IMO. Guy (Help!) 09:09, 9 December 2016 (UTC)
- @DGG: see also Explore: The Journal of Science & Healing. Every reality-based source discussing this journal identifies it as abysmal. Example [2]: "Explore: The Journal of Science and Healing is a journal known for its publication of truly ridiculous studies. Perhaps my favorite of the bunch is from a few years ago and involved looking at whether positive “intent” could be embedded in chocolate". However, these sources are largely blogs and the like so we can't use them, leaving us in a cleft stick: we have no reliable independent sources we can use to give a realistic description of the journal, but the article cannot be deleted because presence in indexes is asserted to be categorical proof of notability. Thus the only description we have for the journal is the self-description supplied to the indexes by the publisher, which (unsurprisingly) fails to mention its reputation for publishing abject nonsense, or indeed the fact that its co-editor in chief is a notorious crank. In fact we have a minor edit war right now because the people who want to keep the journal based on WP:ITSINDEXED also don't want to include the fact that Dean Radin is identified as co-editor in chief. Which is really weird. It's as if the journals project wants to mirror SCOPUS in containing absolutely no critical commentary whatsoever, but that would be stupid and I can't imagine it to be the case. Guy (Help!) 15:19, 9 December 2016 (UTC)
Guy, saying "journal fans" is unfair. That pre-supposes an uncritical approach. Many of those writing articles on journals don't have a "fan" agenda. They are simply trying to write about journals. There are many notable journals still missing, btw. Your focus on the substandard journals and the journals being used to push an agenda might be colouring your views here. An example of a missing journal in the history field is Guerres mondiales et conflits contemporains. The most you'll find on it is over on the French Wikipedia at fr:Comité d'histoire de la Seconde Guerre mondiale. Things get confused when journals change name and scope. Journals with a long history can sometimes be easier to write about, but even then the history will tend to be published by the journal itself to mark certain milestone anniversaries. Two examples: Astronomische Nachrichten and Annales de chimie et de physique. Carcharoth (talk) 16:20, 9 December 2016 (UTC)
Maybe another way to look at it is whether you would agree that journals on this list and on this list (Arts and Humanities Citation Index) are notable? More on CAIRN here. I also think you are overstating the problem in relation to the "drive to publish in academia". A lot of papers are only read or referenced a few times, but sometimes that is enough. And it is many of those obscure papers that are useful for Wikipedia. Characterising that sort of work as simply existing to provide academics with a publication history and to feed a market is offensive. Some of that does happen, but there are thousands of academics producing low-level, yet still useful output. And some of these more obscure journals provide a platform for that sort of work. The journals themselves might not be notable, but they shouldn't be lumped with the 'problem' journals that you are concerned with. There are: (i) notable and prestigious journals; (ii) obscure but perfectly respectable journals; and (iii) the problem journals you correctly identify. Carcharoth (talk) 16:46, 9 December 2016 (UTC)
- Fair point, it's lazy. What I mean is, editors who are trying to build a comprehensive directory of journals.
- And yes, my issues with the use of crap journals to insert crap content into Wikipedia do indeed colour my views. Guy (Help!) 17:58, 9 December 2016 (UTC)
Explore: The Journal of Science & Healing
Given Elsevier's history of publishing sponsored journals I cannot help but wonder, who is paying for Explore: The Journal of Science & Healing? — Cheers, Steelpillow (Talk) 14:43, 9 December 2016 (UTC)
- That was the action of a rogue employee. None of those fake journals ever got into MEDLINE or Index Medicus, so the cases really are not comparable. --Randykitty (talk) 15:04, 9 December 2016 (UTC)
- The truth is that Explore is a pet project of Dossey and Radin and they use it as a hobby-horse to promote abject nonsense like "intent" and "distant healing". It's the difference between astronomy and astrology. Explore is a journal of pseudomedicine used by proponents of refuted twaddle to get their marketing claims fact-washed into the mainstream. The "systematic review" of Emotional Freedom Techniques is a perfect example. As Jimbo memorably put it, it's not our job to promote the work of lunatic charlatans. Guy (Help!) 15:11, 9 December 2016 (UTC)
- One wishes that Elsevier were as protective of their scholarly reputability. — Cheers, Steelpillow (Talk) 15:22, 9 December 2016 (UTC)
- It's certainly hard to see why they would have anything to do with an editor-in-chief who has compared teaching the scientific method to institutionalised child abuse: "the way kids are taught science these days constitutes a form of child abuse. It involves the forced infliction of a false identity" - the supposedly "false identity" is of course empiricism, the terrible conspiracy to portray empirically verifiable fact as the only proper basis of Scientific Truth™. Obviously the only way to fix this is to give full parity between facts and bullshit, as Dossey himself does in Explore (only without the facts). Guy (Help!) 15:29, 9 December 2016 (UTC)
- Ahhh, the truth!! Any reliable sources for that? Then please add them to the article. Thanks. --Randykitty (talk) 15:35, 9 December 2016 (UTC)
- Dossey's opinion on science is a matter of record, see for example this HuffPo piece: [3]. He basically doesn't believe in the scientific method, for the same reason that Sheldrake doesn't: it fails to support his fervent beliefs. Guy (Help!) 16:24, 9 December 2016 (UTC)
- It doesn't matter. That belongs in a bio of Dossey (if he has one and is notable enough - actually, just looked at that HuffPo piece: it doesn't even mention Explore). It only belongs in the article on the journal if there's a reliable source documenting that his ideas have influenced the journal in some significant way. Just as we WP editors are supposed to edit in a neutral way, regardless of our own opinions, just so he might be editing Explore (I'm not saying this is the case and I have no personal opinion about this either, but as long as we don't have sources establishing biased editing, it does not belong here). --Randykitty (talk) 16:31, 9 December 2016 (UTC)
- I am not advocating including it in the article on Explore. I am, however, pointing out that Dossey is a borderline crank, and Radin is an outright crank. The journal publishes papers supporting the ideology of the cranks who edit it. Some of these are hilariously poor. Some are rank pseudoscience (e.g. [4]). And because there are essentially no reliable independent sources about the journal, we are going to struggle to make the article NPOV. Do you understand my problem here? I actually think this journal is notable, as a major publisher of abject nonsense, one of the few unashamedly lunatic fringe journals published by a major publisher, but I am struggling to source the reality-based perspective on it because most sources simply ignore it. Guy (Help!) 18:04, 9 December 2016 (UTC)
Yes, I see the problem. But unless there are reliable sources that say these things, your or my personal opinion should not influence the content of the article. This is a general problem with fringe science, of course. However, if this journal really is so lunatic, somebody must have written about it (and not just David Gorski on his blog). I did some searching but couldn't find anything that specifically criticizes the journal (although there's a lot about Dossey). There's at this point one critical source in the article. It's a blog post, but the author can be considered an expert in this field so I think it's admissible. There's no independent source praising the journal. Unless more good critical sources can be found, I'm afraid that this is all we can do for now. I repeat, we cannot go beyond the sources (per the second of our WP:Five pillars). --Randykitty (talk) 18:32, 9 December 2016 (UTC)
- Here is a critical mention: [5] (just search the page for "Explore"). Not really an RS, though. Of course, if somebody reputable *has* criticised the poor thing, Elsevier have probably hidden it behind a paywall.... — Cheers, Steelpillow (Talk) 19:23, 9 December 2016 (UTC)
- That is exactly my point. There are no reliable independent secondary sources discussing the journal at all, so we can't write a verifiably neutral article. There's a small amount of commentary from David Gorski on the Science Based Medicine site to slightly balance the puffery of its self-description, and that's it. So: we have a journal that publishes a torrent of abject nonsense, including uncritical coverage of rank pseudoscience, and your preferred inclusion policy means we have pretty much nothing to say about it beyond its own PR. Guy (Help!) 23:34, 9 December 2016 (UTC)
- We need independent RS to meet the GNG, if we are going to use the GNG as the criterion (as I've said, we may use other criteria if they are suitable and win acceptance, as do the traditional standards for journals). But for the content of the article, we just need RSs. In many cases primary sources, even ones connected with the subject, are suitable, and can even override secondary sources in some cases. For journals, the primary source is generally the masthead of the journal itself; the secondary sources are the cataloging record for the journal from the national library--which will always be available--the listing in Ulrich's directory, which will always be available for the sort of journal we are considering, and especially the information in WoS/Scopus. All these are standard sources and considered reliable. (Reliable up to a point--serials cataloging is a very arcane and specialized art, and there are errors in all record sources. Librarians deal with this all the time, though resolving some of the more complicated cases can be very difficult. This is no different from every other field of human endeavor. For example, the indexing data must be derived from the indexes themselves--publishers are not at all reliable here, and often Ulrich's is not reliable either.) DGG ( talk ) 10:10, 10 December 2016 (UTC)
- Sure, but what do we do when these primary sources - which are largely the only sources that are going to get accepted - are blatantly at odds with reality? Explore is a crank journal, it's never going to admit this in its masthead and the publisher is never going to describe it to an index as the International Journal of Ridiculous Nonsense either. Guy (Help!) 13:03, 10 December 2016 (UTC)
Protected page
Hi JzG, I want to create Kim Jae-hwan (badminton) and Goh Giap Chin. Can you help me to open the access? Thanks --Stvbastian (talk) 16:39, 10 December 2016 (UTC)
Maslowsneeds
Maslowsneeds (talk · contribs), who you blocked for 48 hours for violating a WP:TBAN, violated it again. – Muboshgu (talk) 19:25, 10 December 2016 (UTC)
- It's been dealt with. – Muboshgu (talk) 19:39, 10 December 2016 (UTC)