Wikipedia:Bots/Noticeboard/Archive 11
This is an archive of past discussions about Wikipedia:Bots. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page.
New Task Idea?
Hey all, I'm kinda new to this bot thing, but I have an idea for a new task, and it might or might not be good. The reason I'm not asking at Bot Requests is because that page and I have some bad history from about 2-3 years ago. I'm nervous to see what would happen if I went over there again, so I'm posting it here. It's CHECKWIKI Task 64, "Link equal to linktext" ([[Apple|apple]] --> [[Apple]]). The reason I'm asking is because normally FrescoBot does Task 64, but looking at its contribs, it only seems to edit every 2-3 days, and only edits 20-30 articles at a time. If you look here, you can see that there are 2,100+ articles with that error. Editing 2,100+ articles to remove that error is tedious to do manually, so I am asking if this is a good idea. I know PrimeBOT uses AWB to edit, and is run manually by Primefac, so I thought it might be a good idea to do something like that. Please let me know what you think, and please leave constructive criticism only. Thanks! Yoshi24517Chat Very Busy 04:05, 17 March 2017 (UTC)
- This error looks like it falls under WP:NOTBROKEN and WP:COSMETICBOT. Also, I believe your example is incorrect – the capitalization is altered, which a bot would not be allowed to do. I also don't see which BRFA has FrescoBot approved for this task. Did you mean something else, not what CheckWiki 64 lists? — HELLKNOWZ ▎TALK 12:13, 17 March 2017 (UTC)
- @Hellknowz: I forgot to tell you that it would be supervised. I was gonna watch it make its edits. It won't be automatic. And to answer your other question, this shows FrescoBot being able to take care of Task 64. Other things you mentioned: I am trying to make each link have one, not two, so it would not fall under WP:NOTBROKEN. (Meaning just leave it with [[apple]].) Second, what you said about it going under WP:CONTEXTBOT: as I explained above, it would be supervised. I can see it fails WP:COSMETICBOT though. Just a thought, every time it would run and fix all the articles with the Task 64 error, should I have it clean up the whole article at the same time, like AWB normally does when you load a new page? Just a thought. Yoshi24517Chat Very Busy 15:12, 17 March 2017 (UTC)
- That list shows FrescoBot as doing that task, but I could not find any BRFA for it. Either it's listed there by mistake, was meant in regards to similar tasks, or it's doing an unapproved task (or I failed to find the BRFA). I don't think we have any bot approved to do this as the main task rather than just an addition to a substantive edit.
- Converting apple ([[Apple|apple]]) to apple ([[apple]]) is WP:COSMETICBOT. In fact, AWB genfixes have other cosmetic edits. Cosmetic tasks aren't allowed by themselves. Even with other substantive edits, it has recently been very contentious.
- Also, whether you watch the task or not doesn't really change matters. Even if you repeatedly do it manually, it would still fall under the bot policy via WP:MEATBOT. — HELLKNOWZ ▎TALK 15:26, 17 March 2017 (UTC)
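For readers unfamiliar with this error class, here is a rough Python sketch of the CheckWiki 64 transformation being discussed (the helper names are invented, and a production task would also have to skip comments, nowiki blocks and similar contexts, besides clearing WP:COSMETICBOT):

```python
import re

# Matches [[target|label]]; deliberately ignores nested/odd constructs.
PIPED_LINK = re.compile(r"\[\[([^|\[\]]+)\|([^|\[\]]+)\]\]")

def simplify(match):
    target, label = match.group(1), match.group(2)
    # MediaWiki treats only the first character of a title as
    # case-insensitive, so [[Apple|apple]] may safely become
    # [[apple]], but [[IPhone|iphone]] must be left alone.
    same_first = target[:1].lower() == label[:1].lower()
    if target == label or (same_first and target[1:] == label[1:]):
        return "[[%s]]" % label
    return match.group(0)

def fix_checkwiki_64(wikitext):
    """Collapse piped links whose target equals their display text."""
    return PIPED_LINK.sub(simplify, wikitext)

assert fix_checkwiki_64("a [[Apple|apple]] pie") == "a [[apple]] pie"
assert fix_checkwiki_64("a [[Apple|fruit]] pie") == "a [[Apple|fruit]] pie"
```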
PGC
Hi. I have an idea to create articles about PGC objects. I have a full list, like a table, with the characteristics of these objects. Is that possible? I'm not from English Wikipedia; I'm just asking whether it is possible. Thanks for the response. --Artificial123 (talk) 06:59, 17 March 2017 (UTC)
- What is PGC? --Redrose64 🌹 (talk) 10:16, 17 March 2017 (UTC)
- If it is about Principal Galaxies Catalogue see Category:PGC objects. There are already over 500 articles. StarryGrandma (talk) 11:45, 17 March 2017 (UTC)
- Artificial123, you might want to read WP:NASTRO and make a post at the Astronomy WikiProject before jumping into this. I have a vague recollection of this being discussed (and rejected) before, but I could be wrong. Primefac (talk) 12:01, 17 March 2017 (UTC)
- Primefac, I'm not talking about English Wikipedia. I just want an answer to my question: is that possible? The file is similar to an xls table, with name, right ascension, declination and some other parameters. --Artificial123 (talk) 12:07, 17 March 2017 (UTC)
- Well, if you're not talking about editing the en-Wikipedia, then our policies (or our opinions) don't really matter. Primefac (talk) 12:12, 17 March 2017 (UTC)
- I don't see why it wouldn't be technically possible. — HELLKNOWZ ▎TALK 12:17, 17 March 2017 (UTC)
- Hellknowz, can you see how difficult that is? I will send the list. --Artificial123 (talk) 12:46, 17 March 2017 (UTC)
- Not quite sure what would "sending the list" achieve exactly, at least at this stage. If you are planning on creating articles on a large scale, make sure you are familiar with WP:MASSCREATION and WP:NASTRO. You will also want the people at WP:ASTRO to give their feedback on your proposal/idea, since they will know a lot more about the PGC catalogue than most people. Headbomb {talk / contribs / physics / books} 17:01, 17 March 2017 (UTC)
- I repeat: I won't do that in English Wikipedia. --Artificial123 (talk) 18:18, 17 March 2017 (UTC)
- Artificial123, are you asking if this group could tell you if it is possible for a bot to take data from the list and write, or at least start, articles for the Belarusian Wikipedia? And tell you how to write such a bot. StarryGrandma (talk) 23:40, 17 March 2017 (UTC)
- StarryGrandma, I'm asking if it is possible to take the data in a good way. --Artificial123 (talk) 07:11, 18 March 2017 (UTC)
- Yes, it's possible. (Please mind the gap between "possible" and more favorable words, such as "likely" or "desirable".)
- However, I have two alternative suggestions for you: the first is to have this tool enabled on your wiki. People will get information, but you won't have "an article" (that will be incomplete and probably never updated). The other is that you create an infobox that draws its contents straight out of Wikidata (importing whatever is missing to Wikidata), so that you can bot-create a sentence or two but still benefit from updated information.
- (For anyone who is not familiar with Wikidata, then I recommend File:A Gentle Introduction to Wikidata for Absolute Beginners (including non-techies!).webm (also available on YouTube; the link is in the file description page).) WhatamIdoing (talk) 22:54, 20 March 2017 (UTC)
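For reference, the mechanical half of the question is simple; here is a rough sketch of turning such a table into stub wikitext (the file name, column names and stub wording are all invented for illustration, and any real mass creation would need the local community's approval):

```python
import csv

# Invented stub wording; assumes the table has "name", "ra" and "dec"
# columns, as described above.
STUB = ("'''{name}''' is a galaxy in the [[Principal Galaxies Catalogue]], "
        "located at right ascension {ra} and declination {dec}.")

def build_stubs(path):
    """Yield one (title, wikitext) pair per table row."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row["name"], STUB.format(**row)

for title, text in build_stubs("pgc.csv"):
    print(title)  # a real bot would hand (title, text) to its save routine
```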
Identifying cosmetic fixes at Checkwiki
There is currently an effort to identify which WP:CWERRORS should be considered cosmetic and which shouldn't. Help and feedback would be appreciated. Headbomb {t · c · p · b} 12:57, 31 March 2017 (UTC)
In case you haven't noticed, there is currently an RFC on the proposed update to our WP:COSMETICBOT policy. Headbomb {t · c · p · b} 13:00, 31 March 2017 (UTC)
Checklinks
Is broken.
- 73 added, 16 in error.
- 36 added, 18 in error.
- 6 added, 4 in error.
- 56 added, 56 in error.
- 14 added, 9 in error.
- 15 added, 11 in error.
- 147 added, 147 in error.
It was reported a month ago and also (in the wrong place) here. -- GreenC 20:28, 24 March 2017 (UTC)
- So, at this point Checklinks is causing more harm than benefit to the project, with a 50% or higher error rate in the changes it makes. It's a significant burden on my time to clean these edits up. Any help would be appreciated, either in repairing the damage, or in shutting it down until it can be fixed. There's also a bot war situation going on (see edits above). -- GreenC 04:46, 30 March 2017 (UTC)
- @Green Cardamom: I only had a few mins to look this morning, but these edits appear to be getting made by editors - not bots; is there a bot that is making bad edits here? — xaosflux Talk 12:05, 30 March 2017 (UTC)
- Well, it's a Tool .. machine-generated diffs. Theoretically users are supposed to check before saving, but evidently they don't. Do tools fall under the same guidelines as bots? If not, what guidelines do Tools use? -- GreenC 15:09, 30 March 2017 (UTC)
- All edits are the responsibility of the one who makes them, regardless of the tool. — HELLKNOWZ ▎TALK 15:50, 30 March 2017 (UTC)
- I think the point is the tool is no longer reliable and the people using it are blindly trusting it.—CYBERPOWER (Chat) 16:00, 30 March 2017 (UTC)
- This is even understandable: verifying external links takes a lot of time and effort, the very reason they are using the tool, to save that effort. -- GreenC 16:03, 30 March 2017 (UTC)
- If it is really causing that much damage, and given that editors don't seem to check their edits much with an automated tool (what can go wrong?), the tool needs to be shut down, or significantly updated to today's standards.—CYBERPOWER (Chat) 13:58, 31 March 2017 (UTC)
Pinging Dispenser. --Edgars2007 (talk/contribs) 14:39, 31 March 2017 (UTC)
- When I spoke with Dispenser on IRC a few months ago he said he doesn't have the time to fix the tool. Since then he has not responded to error reports on his talk page (see links above) and I doubt he will respond here. At the same time, we have a new tool that does the same thing (and more), so it would be no loss to the community if the tool were retired. -- GreenC 23:51, 4 April 2017 (UTC)
- @Green Cardamom: what do you propose we DO? This external tool is not a bot or used by bots - if you want something like an edit filter that blocks Checklinks edits, it is going to need a bigger venue than WP:BOWN. — xaosflux Talk 03:31, 5 April 2017 (UTC)
- I for one wouldn't mind an edit filter to do that. Actually, it's much simpler. The OAuth consumer can be disabled. The tool will not function without OAuth.—CYBERPOWER (Chat) 08:57, 5 April 2017 (UTC)
- Get a discussion open at User talk:Dispenser/Checklinks and advertise it to VP - it will need community backing to get stewards to shut down someone's consumer. — xaosflux Talk 11:19, 5 April 2017 (UTC)
- Suggestion: I will try to get ahold of Dispenser via IRC today. If that doesn't work, leave a message of intended action on his talk page. If that doesn't work, start an RfC to disable the tool. If after 30 days there is consensus, and Dispenser doesn't comply, start the process to disable the consumer, which I think would actually disable all his tools. Not the intention, but it sounds like the only way. Hopefully it won't go that far. -- GreenC 14:59, 5 April 2017 (UTC)
Dispenser has disabled the "save page" portion of Checklinks on enwiki until he can update the code. -- GreenC 18:05, 5 April 2017 (UTC)
If you're making a bot-related presentation at Wikimania 2017 in Montreal, advertise it here!
I'll be making at least two, assuming they are accepted: one on Article Alerts, the other on Journals Cited by Wikipedia (and possibly a third on bots in general). If you are interested in attending, please sign up! Headbomb {t · c · p · b} 12:59, 7 April 2017 (UTC)
Bots Newsletter, April 2017
Greetings! The BAG Newsletter is now the Bots Newsletter, per discussion. As such, we've subscribed all bot operators to the newsletter. You can subscribe/unsubscribe from future newsletters by adding/removing your name from this list. Highlights for this newsletter include:
Magioladitis ARBCOM case has closed. The remedies of the case include:
We currently have 27 open bot requests at Wikipedia:Bots/Requests for approval, and could use your help processing!
There are multiple ongoing discussions surrounding bot-related matters. In particular:
Several new things are around:
Wikimania 2017 is happening in Montreal, during 9–13 August. If you plan to attend, or give a talk, let us know! Thank you! edited by: Headbomb 11:35, 12 April 2017 (UTC) (You can unsubscribe from future newsletters by removing your name from this list.)
Wikibits.js being removed on 25 April 2017
Some old Javascript, some of which has been deprecated for more than five years, is being removed later this month. Some old scripts may need to be updated. If you think that this might be relevant to your code, then please see https://lists.wikimedia.org/pipermail/wikitech-ambassadors/2017-April/001574.html for more details (including links that show how to fix most of the code). Whatamidoing (WMF) (talk) 19:25, 12 April 2017 (UTC)
ClueBot III misbehaving, assistance needed
Can anybody please assist Cobi (talk · contribs) in fixing ClueBot III (talk · contribs)? It is taking too much text when it archives threads; see User talk:ClueBot Commons/Archives/2017/April#Improper archival of heading, part 3. Basically, when archiving a level 4 subsection, the bot assumes that the subsection terminates at the next level 4 heading - and if there is an intervening level 3 heading, it archives that too, which is an error. --Redrose64 🌹 (talk) 20:36, 27 April 2017 (UTC)
- As it is, ClueBot III is behaving the way it was designed as it is configured. At some point when I have more time, I'll add the features which you desire. ClueBot III was not designed to handle multiple categories of discussion on the same page. Either the page needs to be reformatted (perhaps into transcluded sub-pages? Those work well with CB3), or you need to figure out a different archive strategy. -- Cobi(t|c|b) 23:37, 27 April 2017 (UTC)
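For reference, the boundary rule being requested is the standard one: a section ends at the next heading of equal or higher level, not merely at the next heading of the same level. A minimal sketch of that rule (an illustration only, not ClueBot III's actual code):

```python
import re

HEADING = re.compile(r"^(={2,6})(.+?)\1[ \t]*$", re.MULTILINE)

def section_end(wikitext, heading_match):
    """Offset at which the section begun by heading_match ends.

    A section runs until the next heading of the SAME OR HIGHER level
    (equal or fewer '=' signs), so a ==== subsection stops at an
    intervening === heading instead of swallowing it.
    """
    level = len(heading_match.group(1))
    for nxt in HEADING.finditer(wikitext, heading_match.end()):
        if len(nxt.group(1)) <= level:
            return nxt.start()
    return len(wikitext)

text = "=== A ===\n==== A1 ====\nbody\n=== B ===\nmore"
sub = list(HEADING.finditer(text))[1]            # the ==== A1 ==== heading
print(text[sub.start():section_end(text, sub)])  # stops before === B ===
```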
Breaking change to API
- What
- The "Save" (aka "Publish") button is going to change at all the wikis.
- When
- Sometime soon-ish, probably later this month (i.e., during May 2017). The Persian Wikipedia has volunteered to be the first Wikipedia and will probably see this change later this week.
- Where can you get more information?
- See phab:T162849 and mw:OOjs UI.
- Why does this matter to bot owners?
- Apparently, this changes something about the API. It's the whole way the button is implemented, not just its appearance, that's changing.
- How can I test this?
- You can use the new system by adding ooui=1 to the end of the URL: https://en.wikipedia.org/w/index.php?title=Marie_Curie&action=edit&ooui=1 I'm told that Twinkle will probably be okay (no problems in limited testing), but a few other scripts and bots may need to be updated.
If you think your bot or script will be broken by this, and you can't figure out how to update it, then please post requests for help either here or at WP:VPT. Whatamidoing (WMF) (talk) 17:18, 9 May 2017 (UTC)
- Whatamidoing (WMF), how is that going to affect AWB? Primefac (talk) 17:54, 9 May 2017 (UTC)
- From Phab it looks like a change only on the front-end. As in, the actual API won't be affected. — HELLKNOWZ ▎TALK 17:58, 9 May 2017 (UTC)
- Yes, there shouldn't be any effect on bots due to this unless they're doing weird screen-scraping instead of using the API. User scripts and gadgets that affect the edit form will probably be affected, for example my ajaxpreview script needed a little work. Anomie⚔ 19:51, 9 May 2017 (UTC)
- I have a short but unfortunately all-too-accurate answer to all questions that are more technical than "approximately what color is that BIG Blue Button?": I don't know.
- My recommendation is that you trust Anomie, and test your scripts. The plan is for this to go out to fawiki first (hasn't happened yet), and to make the rest of the plan after we learn whether it caused unexpected problems there. Whatamidoing (WMF) (talk) 15:38, 10 May 2017 (UTC)
- This isn't really the API we are talking about then. When I read this I thought, "not again". This looks like front-end stuff, not stuff the bots use. So this will more or less break a few JS scripts that rely on the edit window.—CYBERPOWER (Around) 22:04, 10 May 2017 (UTC)
We had a few more scripts break at fawiki than we had hoped for, so we're slowing this down a bit. I've collected some information, including diffs of some repairs, at mw:OOjs UI/Fixing scripts and gadgets. If you maintain any scripts, or if you depend upon any and aren't certain whether the owner is actively maintaining them, please check that page and start testing (instructions on the page).
Please also share this information with people at other wikis. Whatamidoing (WMF) (talk) 19:23, 17 May 2017 (UTC)
Approval for WPCleaner use...
In followup to something mentioned at Wikipedia:Administrators'_noticeboard/Incidents#Unauthorized bot job, unresponsive operator: would BRFA be the place to request a bot flag for a user account so that it CAN be run in an approved manner?
An additional concern expressed was that the BOT components of it were not compliant with the relevant policy on cosmetic edits. ShakespeareFan00 (talk) 21:19, 23 May 2017 (UTC)
- Bots are more about what edits they will make than what tool they use - so yes, BRFA is where to request a bot flag to run a dedicated account for bulk editing of repeatable edits. Note: the edits must be in compliance with applicable policies (i.e. you will most likely not get a bot approved to do nothing but make cosmetic edits). — xaosflux Talk 21:59, 23 May 2017 (UTC)
- That was what I understood from the ANI discussion as well, and it effectively means that WPCleaner should not be used until it's fixed. ShakespeareFan00 (talk) 22:03, 23 May 2017 (UTC)
BAG nomination
Please note a nomination for Bot Approvals Group membership is active. Feel free to comment here. ~ Rob13Talk 22:46, 26 May 2017 (UTC)
- Please note, this nomination has been closed as successful. — xaosflux Talk 00:15, 3 June 2017 (UTC)
Inactive bots May 2017
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
The following bots, and their respective operator, have each had no contributions in over two years and are scheduled to be deauthorized in one week per the bot policy activity requirements. If your bot is listed and you wish to retain authorization, please add a note to the table and sign below.
Bot | Edit count | Last edit (UTC) | Operator | Operator's last edit (UTC) | Notes |
---|---|---|---|---|---|
User:SelketBot | 16870 | 20110624183928 | User:Selket | 20140216162053 | |
User:SkiersBot | 124334 | 20110715052412 | User:Skier_Dude | 20120917042322 | |
User:MartinBotIII | 136346 | 20110731122144 | User:Martinp23 | 20130427212553 | |
User:Kotbot | 157583 | 20110816121147 | User:Kotniski | 20120124000153 | |
User:WalkingSoulBot | 1 | 20110823130647 | User:WalkingSoul | 20110605220714 | |
User:GurchBot | 7421 | 20110919112313 | User:Gurch | 20130804182024 | |
User:MiszaBot | 81480 | 20111013170506 | User:Misza13 | 20150219094323 | |
User:DodoBot | 136137 | 20111126163905 | User:EdoDodo | 20111126164139 | |
User:RaptureBot | 13074 | 20111218221254 | User:FinalRapture | 20111120060515 | |
User:Rfambot | 1774 | 20120213174928 | User:Jennifer Rfm | 20131106230051 | |
User:FlBot | 14324 | 20120217110113 | User:Fl | 20140326014308 | |
User:MessageDeliveryBot | 10187 | 20120605022949 | User:EdoDodo | 20111126164139 | |
User:AlanBOT | 6712 | 20130429203141 | User:ikseevon | 20130429040405 | |
User:MMABot | 5265 | 20130505205805 | User:TreyGeek | 20130628122155 | |
User:LyricsBot | 27368 | 20130921052032 | User:Dcoetzee | 20141003225306 | Operator has been banned; accounts are already globally locked |
User:DyceBot | 45604 | 20140105070113 | User:Dycedarg | 20140315182843 | |
User:HersfoldArbClerkBot | 11398 | 20140110024813 | User:Hersfold | 20140110040539 | |
User:IPLRecordsUpdateBot | 19 | 20140210113220 | User:Jfd34 | 20140420092748 | |
User:Wpp research bot | 3 | 20140328200839 | User:Jantin | 20141222190945 | |
User:AstRoBot | 4229 | 20150125114428 | User:WDGraham | 20150214171645 | |
User:HBC AIV helperbot7 | 253005 | 20150204230319 | User:Wimt | 20150512214048 | |
- Thank you, — xaosflux Talk 15:06, 29 May 2017 (UTC)
- Bot flags are being removed due to inactivity. Should these bots want to be reactivated in the future a new BRFA will be needed. — xaosflux Talk 15:39, 5 June 2017 (UTC)
Getting updates on finished tasks?
So I had an idea regarding bot tasks, specifically regarding followup. I've often wondered (even with my own tasks) how many edits were actually made during a bot run, to see how accurate the initial estimate was. I've also thought about minor tweaks that were made to the code to improve it.
Would it be reasonable to ask bot operators to give some sort of "after action report" for one-off bot runs? Primefac (talk) 14:14, 8 June 2017 (UTC)
- I'd love to see them list out their tasks on their userpage, with details and notes - but getting people to do it consistently may be hard. — xaosflux Talk 17:59, 8 June 2017 (UTC)
- Oh! That's a good idea. Might update my own table. Primefac (talk) 18:21, 8 June 2017 (UTC)
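As a concrete illustration of the kind of after-action number being discussed, here is a sketch that counts a bot's edits over a run window via the Action API's list=usercontribs (the bot name and timestamps are placeholders):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def count_edits(user, newest, oldest):
    """Count `user`'s edits between two ISO timestamps.

    list=usercontribs pages newest-first by default, so ucstart is
    the newer bound and ucend the older one."""
    total = 0
    params = {
        "action": "query", "list": "usercontribs",
        "ucuser": user, "ucstart": newest, "ucend": oldest,
        "uclimit": "max", "format": "json", "formatversion": 2,
    }
    while True:
        data = requests.get(API, params=params).json()
        total += len(data["query"]["usercontribs"])
        if "continue" not in data:
            return total
        params.update(data["continue"])

print(count_edits("ExampleBot", "2017-06-08T00:00:00Z", "2017-06-01T00:00:00Z"))
```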
Please comment there. Headbomb {t · c · p · b} 17:59, 22 June 2017 (UTC)
Redirects
I'm hearing that phab:T53736 is being discussed seriously, and that it may affect bots. I don't understand the project yet, but if you're interested in how bots cope with redirects, then please take a look. Whatamidoing (WMF) (talk) 16:19, 27 June 2017 (UTC)
- Note: This RFC has been scheduled for public discussion on June 28 (Wednesday) 21:00 UTC (2pm PDT, 23:00 CEST). As always, the discussion will take place in the IRC channel #wikimedia-office. Quiddity (WMF) (talk) 17:34, 27 June 2017 (UTC)
- The phab does not adequately explain what "change wikipage redirects to be proper HTTP redirects" means, i.e. what is proposed. I get that this is about making section links work better and not be javascript-dependent. The API just provides a means to read the contents of a page; it's up to the bot coder to decide whether to "follow redirects" or not. Following redirects is accomplished by reading the page, determining that it is a redirect, and then making another API call to read the contents of the page it redirects to. I don't think there is an API for just reading sections of a page. wbm1058 (talk) 13:53, 28 June 2017 (UTC)
- I believe this won't have any effect on bots using the API to access pages. The proposal is about the behavior of redirects being accessed via the web UI entry point (index.php). If your bot is using web UI entry points, you really should update it. Anomie⚔ 15:53, 28 June 2017 (UTC)
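To make the API route concrete, here is a sketch of reading a page while letting the server resolve redirects; the parameters used (redirects, prop=revisions, rvslots) are standard Action API parameters, and WP:BON is just an example shortcut:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

# redirects=1 asks the Action API to resolve redirects server-side,
# so a bot never has to parse #REDIRECT [[...]] wikitext itself.
resp = requests.get(API, params={
    "action": "query",
    "titles": "WP:BON",       # an example redirect/shortcut
    "redirects": 1,
    "prop": "revisions",
    "rvprop": "content",
    "rvslots": "main",
    "format": "json",
    "formatversion": 2,
}).json()

page = resp["query"]["pages"][0]
print(page["title"])                       # the resolved target title
print(resp["query"].get("redirects", []))  # which redirects were followed
```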
New discussion on "friendly" (human) wording of InternetArchiveBot et al.
There is a discussion happening at the Wikipedia:Village pump (proposals) page on the tone of the wording of the InternetArchiveBot messages that are being left on article talk pages. If you are interested, please see that discussion. Thanks! KDS4444 (talk) 23:43, 29 June 2017 (UTC)
Bot talkpage issue
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
User:InternetArchiveBot is a bot. Responding to one of its edits, I posted this on its talkpage. After "saving", my post did not show?! I had to do research to discover that the (regular-looking) page says something like: "Please do not edit this page. Messages left here will likely go unnoticed.". In other words: the bot is deaf. (To be clear: talkpage instructions are not defining. For example: we have Redirects.) Why is this bot allowed to operate like this? -DePiep (talk) 20:40, 28 June 2017 (UTC)
- I commented at VPT for the "technical" concern you had, as for the conceptual this is a better venue. — xaosflux Talk 20:52, 28 June 2017 (UTC)
- @DePiep: I don't see any real problem on that page, but do see several points to address:
- You edited over page protection to put that message there - not something that most editors will encounter. You should have seen an obvious indicator of this, along with the protection log explaining this page more.
- The bot's userpage, User:InternetArchiveBot, very clearly identifies the bot and its tasks.
- The operator User:Cyberpower678 is clearly identified - did you contact him?
- That page had very clear directions about how to report bugs, and how to contact the operator on wiki. It is in effect a large non-standard soft redirect.
- Of course bots don't listen - they aren't generally programmed to - they generally don't talk back either.
- — xaosflux Talk 21:04, 28 June 2017 (UTC)
- C'mon. I go to the talkpage of a bot. That one should not be deaf. My comment is flushed by saving. All right, that talkpage even promises in small print "I'll dump your contribution", but that's not OK. My point is, again: why is this acceptable bot talkpage behaviour? (Also: why does it even require TE rights to post at all???) -DePiep (talk) 21:14, 28 June 2017 (UTC)
- re #2: "I don't see any real problem on that page": then why reply here at all (duh)? Even worse: why do you reply in 5 points to a thing you don't see or get? -DePiep (talk) 21:25, 28 June 2017 (UTC)
- Note: The <div style="display:none"> appears to be why your post doesn't show. Of course, the placement of that may be disputable, as it appears intended to make new entries added at the bottom invisible (except when viewing the talk page source). —PaleoNeonate - 21:34, 28 June 2017 (UTC)
- Not 'disputable', PaleoNeonate. Plain talkpage abuse. -DePiep (talk) 22:12, 28 June 2017 (UTC)
- I would suggest contacting the bot's operator - Cyberpower678, as the bot likely isn't programmed to respond to your queries on its talkpage. SQLQuery me! 21:54, 28 June 2017 (UTC)
- Yes sir. That says it all: to contact the bot operator you must go to the bot operator's talkpage. So why does User talk:InternetArchiveBot not simply redirect to that operator's talkpage??? (Instead of dumping my post down the drain.) -DePiep (talk) 22:09, 28 June 2017 (UTC)
- Probably because contacting the botop is only one of the four potential reasons why you'd be wanting to fix something related to the bot. I find the FAQs on that talk page rather helpful for figuring out where to go. Primefac (talk) 22:18, 28 June 2017 (UTC)
- Missing the point again. When I go to a Talkpage, my post should be handled, not sunk. -DePiep (talk) 22:24, 28 June 2017 (UTC)
- Not really. If you go to a talk page that says "Don't click this link" you shouldn't click the link. Similarly, if the talk page says "Posts here will be ignored, try one of these other four options" you should probably try one of the other four options. As for "not showing" your message, I believe that's been answered above - the page is designed not to show messages. Primefac (talk) 22:29, 28 June 2017 (UTC)
- (ec) Bull shit. Talk pages are open for talking. If the owner wants something else: use a smart template or a redirect. But do not require that I read a talkpage manual. -DePiep (talk) 22:40, 28 June 2017 (UTC)
- I can totally understand why IABot has this set up - I use a similar one with TheMagikBOT, with a redirect to my talk page. Managing 2 talks is unnecessarily complicated, and because you never sign into the bot account in the 'normal' way, messages can go unnoticed (not seen on watchlist etc.). Using 1 talk page with notifications that I will definitely see makes it far easier for the bot op to communicate, which is good for everybody in the long run. Using a redirect is seamless, and in this case, I can't see how you can miss all the big notices and protection level without checking what the issue is and where to leave concerns. As long as it is clear where concerns/issues should be directed, there is no issue. TheMagikCow (T) (C) 08:51, 29 June 2017 (UTC)
- Basically what Primefac said above. I don't see the issue, you were told where to go for bugs/feedback/questions, and you purposefully didn't do that. That's not on the bot op, that's on you. Headbomb {t · c · p · b} 22:31, 28 June 2017 (UTC)
- I was not 'told'. It is a regular talkpage. -DePiep (talk) 22:42, 28 June 2017 (UTC)
- I assume it omits redirecting in order to give information specifically to users who encountered the bot and not its operator. The same information at User talk:Cyberpower678 would distract posters with no interest in the bot. I suggest adding __NONEWSECTIONLINK__ when posts are unwanted on the page. Users using the new section link can see their post in preview and then it vanishes on save. The top could also have a source comment saying "DON'T POST TO THIS PAGE. READ THE INSTRUCTIONS." Maybe repeat in the last section for users who try to edit that and manually add a section heading. Or add __NOEDITSECTION__ to prevent that. The unclosed <div style="display:none"> was apparently added deliberately.[1] That does seem a little extreme without an explanation. PrimeHunter (talk) 22:32, 28 June 2017 (UTC)
- PrimeHunter, that's actually a pretty decent idea, were the page not TE protected. Maybe Cyberpower thought that a TE or admin would notice all of the various/subtle cues about not posting on the page. The other option, of course, would be for them to bump it to full admin protection, since they have that ability and probably only one person (*coughcough*) would object . Primefac (talk) 22:52, 28 June 2017 (UTC)
- Both magic words added. — xaosflux Talk 23:21, 28 June 2017 (UTC)
Bot talkpage requiring TE permission
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
While we are at it: how and why can such a talkpage get TE-level of protection? -DePiep (talk) 22:45, 28 June 2017 (UTC)
- Probably to avoid non-template editors from posting on a page that isn't supposed to be posted on? Primefac (talk) 22:49, 28 June 2017 (UTC)
- I'm the admin who added the TE protection. It was an increase in level following a request to do so (using the same reason as previous) and TE was the chosen level because the botop was a TE and not a sysop at the time it was applied. Probably, full protection would be more appropriate at this stage. -- zzuuzz (talk) 23:00, 28 June 2017 (UTC)
- Yes, full protection would be most appropriate at this point. We occasionally use template editor protection as a hacky solution (e.g. the auto-substitution page for AnomieBOT), but that should be done as infrequently as possible. ~ Rob13Talk 23:25, 28 June 2017 (UTC)
- Thirded and Done. Primefac (talk) 23:29, 28 June 2017 (UTC)
- (ec) re: are you both talking about the user:bot page or about the user talk:bot page? Do you know & apply the difference in these pages by my original question? -DePiep (talk) 23:34, 28 June 2017 (UTC)
- Your original question was already answered. The reason your post never displayed was because Cyberpower placed <div style="display:none"> at the bottom of the page, meaning that all new sections would immediately disappear. Thank you for reminding me to extend full protection to the IABot's user page, though. Primefac (talk) 23:37, 28 June 2017 (UTC)
- To be precise, the new sections don't disappear; they are not flushed, dumped, sunk or otherwise removed. They are present, but undisplayed. A <div> tag starts a page division. If there is no corresponding </div> tag, the division ends at the bottom of the page. Any HTML element may be given a style= attribute, and the display:none declaration causes the element to not appear in the formatting structure. So if your browser has an "inspect element" feature which permits the toggling of styling, it is possible to make the thread in question displayed by deselecting the display:none declaration. As noted above, editing the page source also shows that it is present. --Redrose64 🌹 (talk) 23:53, 28 June 2017 (UTC)
- This is a noticeboard for bot matters; you brought a bot matter here, and you got your answer. There are no issues of non-compliance with WP:BOTCOMM or with WP:BOTACC, and you were given clear directions on User talk:InternetArchiveBot on where to ask your questions about the bot. Headbomb {t · c · p · b} 00:16, 29 June 2017 (UTC)
- To be more precise, Rr64: this WP:BON is a WP:ANI where all "established" ANI/BON-editors can go loose, without restraint, to protect their friends. Against other editors: sentiment only, no reading or arguments needed. For example: by now the Talkpage is blocked (level Obama!). A talkpage!, just to kill my reasonable question. -DePiep (talk) 00:01, 29 June 2017 (UTC)
- To be precise, the new sections don't disappear; they are not flushed, dumped, sunk or otherwise removed. They are present, but undisplayed. A
- To answer your secondary question (which has also already been answered) - there are four options given on the bot's talk page giving directions on what to do if you need to notify someone of the bot doing something it shouldn't. A bot can't think for itself, which is why there's no real reason to be able to post on its talk page. Primefac (talk) 23:39, 28 June 2017 (UTC)
- Just to be clear: what is the WP:BON final reply to my original question? -DePiep (talk) 00:03, 29 June 2017 (UTC)
- Either I'm missing a question (or two), or this reply should have done it. Did I miss part of the question you asked, and if so, would you mind re-asking it? Primefac (talk) 00:06, 29 June 2017 (UTC)
- You just linked to one of dozens of replies, so why do you claim that is the answer? (BTW it is not; for example, it says: "Your original question was already answered...".) Do you feel proud? What a load of BS you BON homies have to kill any decent argument. You all are promoted to level WP:ANI! Your result: a bot talkpage is protected (more heavily, to keep editors like me out!), and none of you had a fair talk to explain it. -DePiep (talk) 00:24, 29 June 2017 (UTC)
- OK, I tried giving you the benefit of the doubt here, but you don't seem interested in listening. So, for the last time: the talk page was protected because that's not where the bot operator wants you to leave messages. You were told where to ask your question: at User talk:Cyberpower678. This is prominently displayed and explained on User talk:InternetArchiveBot, which is fully compliant with our WP:BOTACC and WP:BOTCOMM policies. What part of your question hasn't been answered? Why are you still here badgering people who try to help you, rather than asking your question about User:InternetArchiveBot at User talk:Cyberpower678? Headbomb {t · c · p · b} 00:34, 29 June 2017 (UTC)
Okay, let's try this again
- DePiep, clearly we have missed something fundamental in translation between your original request and what this thread has become. Let's just forget all of that above. What is your question (edit: assuming it hasn't been answered by Headbomb above)? Primefac (talk) 00:28, 29 June 2017 (UTC)
- "You all are promoted to level WP:ANI." I am not; I am but an assistant pig-keeper. —PaleoNeonate - 00:50, 29 June 2017 (UTC)
- re Primefac: the question has been answered, implicitly: by changing the talkpage setup without admitting something was wrong. -DePiep (talk) 10:55, 29 June 2017 (UTC)
I believe that AWB bots automatically stop if a message is left on the talk page. This is good practice and it probably wouldn't be a good idea to fully protect these talk pages. I see that User:InternetArchiveBot has a shut-off option so this is not a concern in this case. — Martin (MSGJ · talk) 11:30, 29 June 2017 (UTC)
- @MSGJ: yes, that is standard for bots running AWB, but not for most custom frameworks such as the one used by this bot. — xaosflux Talk 11:34, 29 June 2017 (UTC)
The real question, aside from the drama above, seems to be: should bot talk pages be allowed to redirect to other pages, or should bot communication be on the bot talk page only? Personally, I'm strongly opposed to this (as above) but welcome constructive discussion to clear the matter up formally. TheMagikCow (T) (C) 12:26, 29 June 2017 (UTC)
- In my mind, it all boils down to "yes, they should be allowed but not forced." For a bot like PrimeBOT, the talkpage-as-redirect serves as the "shut down" function and it would be a pain in my arse to have to re-log-in every time someone left a thank-you at the bot's talk page. For a bot like IABot or AnomieBOT, the shutdown is an external process and so the talk page commenting doesn't do anything to affect the processes; that doesn't mean that people have to comment there. Bot accounts are essentially alt accounts, and as far as I can tell no one is forcing every alt account to have their own talk page (i.e. many people with alts have the talk page redirect to their main). Primefac (talk) 12:38, 29 June 2017 (UTC)
- Hmm. I'm not sure about this interpretation of "the real question". I have read the above, and it sounds to me like one could with equal accuracy summarize the discussion as "Do editors have a right to have their comments displayed on a bot's talk page, even if that talk page says not to post there?" Which is quite different, and so we probably shouldn't guess.
- But another question seems more salient to me: User:DePiep, did you actually see/read/understand the message that said not to post there, before you saved the page? That little coffee-roll-colored box looks like exactly the thing that would produce banner blindness in experienced editors, through no fault of your own. WhatamIdoing (talk) 20:09, 29 June 2017 (UTC)
- If we go down that logic - what's the difference between "I post on a bot page and it disappears" vs "I post on a bot page and the owner immediately removes it"? If a user wants their bot talk page permanently protected, because they don't want other editors posting there, they are allowed (at least in my mind) to use any means they want to ensure that doesn't happen. We're flinging around silly hypothetical situations, making wild accusations (both real and fantasy), and really I don't see us reaching any sort of "conclusion" without either a straight question-and-answer or an RFC (i.e. question and discuss). Primefac (talk) 21:35, 29 June 2017 (UTC)
- "The owner immediately removes it" doesn't make the poster feel like his comment will never be read, or that the editing software is broken. Perhaps a more clearer example would be "I post on a bot page, and the bot reverts me with an edit summary of 'Do not post on this page. Please read the directions.'". WhatamIdoing (talk) 05:32, 30 June 2017 (UTC)
Speaking as a bot operator, I tried giving blatant warnings on the bot's talk page that the bot won't respond on that talk page and all queries are routed to the bit recycling bin. Even then, that didn't prevent editors from trying to engage the bot as a user. I filed for full page protection (requiring an admin to be able to edit) and still that doesn't prevent industrious admins from dropping notices where they'll be ignored. There's no good reason (IMO) why IAB uses that unclosed div tag, which has the side effect of potentially gobbling up other data. I highly recommend (with what I see as the application of full protection) removing that unclosed tag. Hasteur (talk) 13:53, 29 June 2017 (UTC)
- I closed that div tag, referencing this discussion. With both the magic words and the full protection, if an admin really wants to override all that and put a message there that the directions already say will be ignored, then it won't matter much anyway. It may help if such admins develop a taste for fish though. — xaosflux Talk 14:01, 29 June 2017 (UTC)
- TL;DR: You all keep blaming me for 'not understanding' the talkpage header, while meanwhile clearly feeling the need to change that page's settings [2]. (As in: "I don't smell" *goes for a shower*.) Why did not any WP:BON habitual admit that I do have a point? -DePiep (talk) 00:48, 30 June 2017 (UTC)
- re WhatamIdoing: "did you actually see/read/understand the message that said not to post there .. before saving". This is a good Q, but not enough. My A:
- Of course I did see it. But why should I grasp that it says: "Ceci n'est pas une Talkpage"? Why even can a talkpage self-declare that? Please someone explain why this talkpage opening would wikilegally enforce me to understand the self-contradicting message?
- The main line is: never you can say "Do not talk on this talkpage". Full stop. And then, when I do use that talkpage to talk, blame ME for talking?! Also, each and every editor should know that per WP:TALK: never manipulate someone else's talk (includes hiding, and [3]).
- Counter-question: If all this is MY fault & misunderstanding, why did you all scramble to edit that talkpage (into blocking Talk at every crippled cost) [4]?
- Then, in a following stage, here at WP:BON (my first visit, AFAIK) not a single habitual editor came forward to say: "DePiep does have a point". Not one of you all. Everyone kept repeating that I was at fault for not reading a header "ceci n'est pas une Talkpage". Even on my own talkpage editors came to 'explain' my error, then with a support !vote afterwards [5]. Otherwise respectful editors rushed to close it asap as resolved? [6] [7]. This is why I referenced WP:ANI: when I make a (minor) critical point, you all circled the wagons (one of us is attacked! Fire outward!), and not one of you kept cool. -DePiep (talk) 00:48, 30 June 2017 (UTC)
- Never you can say "Do not talk on this talkpage". You most definitely can, we even mention this possibility in the bot policy WP:BOTCOMM in the form of redirects. IABot's talk page is effectively a soft redirect, giving you four options depending on what you want to do. There is no 'penalty' for not following talk page directions, but if you ignore the bot op's directions for contacting them, you can't expect a prompt reply from them. never manipulate someone else's talk WP:TALK does not say that, otherwise one couldn't revert vandalism or other inappropriate posts made by error. why did you all scramble to edit that talkpage To prevent further misunderstandings / posts made at the wrong location.
- This discussion is rather unproductive, so unless you actually have something to say that is of relevance to the operation of bots, I'll point you to the RFC on IABot. Headbomb {t · c · p · b} 01:11, 30 June 2017 (UTC)
- Why should you grasp this? Well, like most experienced Wikipedians, you're usually pretty smart about following directions, so I'd have assumed that if you saw and understood "Don't post here: For A, go there; for B, go to this other page; for everything else, go to this third page", then you'd have just done that (assuming that you wanted a timely response, etc.). It is not really that different from the instructions that we spam on all sorts of pages, beginning with Talk:Main Page – in fact, the only substantive difference is that when you reach the end of this list, there's nothing left that should be posted to that particular talk page. Talk:Main Page provides you with 17 different places to post questions (I just counted), but leaves one topic of discussion that should be posted to that talk page.
- I'm feeling like you were really surprised when the comment (the one you left after reading the directions that told you not to do that...) was not displayed to you. WhatamIdoing (talk) 05:32, 30 June 2017 (UTC)
This is so far out of hand. @Cyberpower678: In order to get this discussion over with and move on, would you be fine with adding a literal soft redirect to the bot user talk page to make it even more obvious that the editor is intended to comment on your talk page? This is not indicative of any wrongdoing on your part, just the path of least resistance (if you're interested in taking it). ~ Rob13Talk 01:41, 30 June 2017 (UTC)
- Nope. What’s the soft redirect template name?—CYBERPOWER (Message) 02:05, 30 June 2017 (UTC)
- @Cyberpower678: not sure how it will really look there, but it is {{soft redirect}}. — xaosflux Talk 02:10, 30 June 2017 (UTC)
- Speaking only for myself, I'd find this more confusing than anything else. "Comment at X" at the top, followed by "WAIT, if you want to comment somewhere, comment at A for this, B for that, C for this, and D for that." {{Soft redirect}} should only be used for directing people to other sites/other projects. Headbomb {t · c · p · b} 03:15, 30 June 2017 (UTC)
- Perhaps that's right. Bah. Well, can't please everyone. I think the clear consensus here is that this talk page is wholly appropriate based on our bot policy and talk page guidelines. ~ Rob13Talk 05:05, 30 June 2017 (UTC)
- WhatamIdoing, Headbomb, Xaosflux, PaleoNeonate, MSGJ, Primefac, Rob13: whatever you like to claim. The horror. -DePiep (talk) 00:42, 1 July 2017 (UTC)
Cydebot revisiting a page multiple times
Check the edit history here (2013). Maybe we should encourage the bot op to wrap up changes? I am sure there are more recent examples. -- Magioladitis (talk) 22:17, 3 July 2017 (UTC)
- Probably not possible without a full rewrite, and the bot operator is no longer active. That is an extreme edge case. I agree it would be ideal to wrap together changes, just maybe not feasible. Long-term, we probably want a replacement for Cydebot with a more active botop. ~ Rob13Talk 22:51, 3 July 2017 (UTC)
- I'm not really concerned with chasing down a 4 year old issue; is it currently occurring? — xaosflux Talk 23:22, 3 July 2017 (UTC)
- Funny, that article shows a human editor making multiple cat changes in 2 edits recently though. — xaosflux Talk 23:23, 3 July 2017 (UTC)
- Xaosflux yes, it's still happening. It also creates duplicated categories (CHECKWIKI error 17). This is how I spot these. -- Magioladitis (talk) 23:34, 3 July 2017 (UTC)
- Do you have some recent diffs? We can certainly review if it is a problem or not, but need some recent data. — xaosflux Talk 00:24, 4 July 2017 (UTC)
I have proposed to do what Cydebot does with Yobot, at least at the level of closed XfDs. -- Magioladitis (talk) 23:35, 3 July 2017 (UTC)
- Would you handle it with AWB or with a dedicated script? We'd need a replacement that works as Cydebot does at WP:CFD/W to be a workable solution. @Xaosflux: I have no diffs, but I've worked around CfD enough to figure out how Cydebot works. It processes the categories placed at WP:CFD/W one-at-a-time from top to bottom. So if multiple categories are put up at CFD/W at the same time and are on the same page, it will do one edit for each category being removed or renamed. This isn't ideal, though I will note it's exceedingly rare. Combine this with Cyde's general inactivity lately and it would be a good idea to look for a new bot operator. Definitely not worth blocking the bot in the short term, which would destroy CfD's workflow until replaced. ~ Rob13Talk 01:03, 4 July 2017 (UTC)
- This isn't necessarily a "bad thing" - "do one thing, and do it well" goes a long way. If the removal isn't part of WP:CFDS then linking to the specific CFD may have use. — xaosflux Talk 01:20, 4 July 2017 (UTC)
- I could use a dedicated script inside AWB. Right now AWB does not allow multiple category renames. -- Magioladitis (talk) 11:25, 4 July 2017 (UTC)
- That likely wouldn't work, at least not without significant loss of functionality. Correct me if I'm wrong, but you'd have to manually start that. It couldn't just run and read things off WP:CFD/W? ~ Rob13Talk 13:14, 4 July 2017 (UTC)
- You are correct, everything would have to be manually loaded/run. I don't know if that would necessarily mean a lack of functionality specifically (depending on the code being run) but it would certainly impact the efficiency and speed at which the changes are being implemented. Primefac (talk) 15:25, 4 July 2017 (UTC)
- It depends. If you change the way the pages are shown in a format similar to WP:AWB/TR we could even do it automatically. -- Magioladitis (talk) 17:37, 4 July 2017 (UTC)
- Hm... Is speed important when fixing things? -- Magioladitis (talk) 17:39, 4 July 2017 (UTC)
- Maybe I misspoke. I don't think it's necessarily an issue of "speed" but "timing" - a bot that runs automatically like AnomieBOT (for things like automatic template substitution or TFD filing) will only be lagging a change by an hour or two, whereas a bot that runs "once a day" or similar could in theory start almost a full day later. If we're talking about efficiently removing categories from pages, it makes more sense to have it set up like the former. Primefac (talk) 17:54, 4 July 2017 (UTC)
Speed is kind of important for this one too. We want to minimize the time when articles are split between two categories when a category is being renamed. I wouldn't want a bot renaming a 500-article category making an edit every 10 seconds. AWB can do things quickly, so that's not a problem. The timing as Primefac states is kind of important to keep CFD running smoothly. Whenever Cydebot has been down for even a couple days, it's a huge pain to everyone trying to manage that process. ~ Rob13Talk 18:05, 4 July 2017 (UTC)
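To illustrate what "wrapping up changes" could look like, here is a sketch that applies a whole batch of renames in a single pass over the page text and drops the duplicate categories mentioned above (the rename map stands in for whatever WP:CFD/W lists; this is not Cydebot's actual code):

```python
import re

CATEGORY = re.compile(r"\[\[\s*[Cc]ategory\s*:\s*([^|\]]+?)\s*(\|[^\]]*)?\]\]")

def apply_renames(wikitext, renames):
    """Apply every pending rename in one pass (one edit per page),
    dropping any category tag that would end up duplicated."""
    seen = set()

    def repl(m):
        name = renames.get(m.group(1), m.group(1))
        if name in seen:
            return ""
        seen.add(name)
        return "[[Category:%s%s]]" % (name, m.group(2) or "")

    return CATEGORY.sub(repl, wikitext)

text = "[[Category:Old name|sort]]\n[[Category:Other]]\n[[Category:Old name]]"
print(apply_renames(text, {"Old name": "New name"}))
# [[Category:New name|sort]] and [[Category:Other]] survive;
# the second "Old name" tag is removed as a duplicate.
```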
Another API thing
Krinkle posted a message to several mailing lists that some of you may want to read. It begins like this:
TL;DR: In April 2017, the jQuery library in MediaWiki was upgraded from 1.x to 3.x (the current stable version), with the jQuery Migrate plugin enabled to ease migration. We temporarily still load jQuery 1.x on Wikimedia wikis. Read about the API changes at https://jquery.com/upgrade-guide/3.0/
The full message can be read here: https://lists.wikimedia.org/pipermail/wikitech-ambassadors/2017-June/001617.html
Whatamidoing (WMF) (talk) 19:53, 5 July 2017 (UTC)
- That doesn't sound like the API that pertains to us, unless I'm missing something. jQuery is a JS library. JS runs in browsers. If I'm missing something, please correct me.—CYBERPOWER (Message) 00:17, 6 July 2017 (UTC)
- You're not missing anything. This change will affect gadgets and user scripts, but probably won't affect any bots. Anomie⚔ 11:59, 6 July 2017 (UTC)
Concern about Magic links bot
Unproductive complaining about a problem that doesn't exist. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 14:44, 6 July 2017 (UTC)
- The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
(moved from WT:BAG#Is this the right forum to bot operators unpermissioned?)
Is this the right forum to ask that bot operators be de-permissioned? If not, could you redirect me? Thanks. --Hobbes Goodyear (talk) 20:59, 2 July 2017 (UTC)
- @Hobbes Goodyear: I'm not exactly sure what you are asking; this is about the Bot Approvals Group - for more general bot/operator discussions please see Wikipedia:Bots/Noticeboard. — xaosflux Talk 21:26, 2 July 2017 (UTC)
- @Xaosflux: Thanks, that's helpful. --Hobbes Goodyear (talk) 02:18, 6 July 2017 (UTC)
- @Xaosflux: Hobbes is concerned about the edit summary used by Magic links bot, claiming it isn't clear what it's doing. Everyone else (see here) seems to think the edit summary is fine. Hobbes is looking for someone to remove the permission granted by BAG to operate the bot because he doesn't like the edit summary. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 21:32, 2 July 2017 (UTC)
- Nihonjoe (talk · contribs) Please do not make requests on my behalf, especially when they misrepresent me, such as in this claim: "Hobbes is looking for someone to remove the permission granted by BAG to operate the bot because he doesn't like the edit summary". --Hobbes Goodyear (talk) 02:25, 6 July 2017 (UTC)
- Nihonjoe, I was thinking that might be the case, but xaosflux is right that BOTN is the proper location for that request. Primefac (talk) 21:36, 2 July 2017 (UTC)
- Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 21:45, 2 July 2017 (UTC)
- (edit conflict) @Nihonjoe: thanks for the note, BON is the place to ask for bot authorization reviews if needed - pretty drastic for an edit summary - but that is the venue if really warranted and at an impasse with the operator. — xaosflux Talk 21:38, 2 July 2017 (UTC)
- @Xaosflux: In this case, it seems to be more of an impasse with the concerned person as everyone else in the discussion (including me, so perhaps I'm biased) seems to think the edit summary is acceptable. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 21:45, 2 July 2017 (UTC)
- FYI, there was one more person who complained(?) about the task in a different place. -- Magioladitis (talk) 23:12, 2 July 2017 (UTC)
@JJMC89, Hobbes Goodyear, Magioladitis, Xaosflux, Primefac, BU Rob13, SQL, SkyWarrior, and Cyberpower678: Pinging those involved in previous discussions of which I am aware. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 03:37, 3 July 2017 (UTC)
- Since the ping doesn't seem to have worked for everyone, I posted a note on everyone's talk page (except Magioladitis, since he already commented here). ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 16:13, 3 July 2017 (UTC)
- @Smeat75: too. They raised concerns as well. -- Magioladitis (talk) 09:31, 3 July 2017 (UTC)
- @Magioladitis: Will you link to where they expressed their concern, just for the record here? Thanks in advance! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:05, 3 July 2017 (UTC)
- @Nihonjoe: I pinged the editor already. I received many complaints myself for trying to do the same task from my main account. Part of the complaints had to do with the task itself. -- Magioladitis (talk) 20:46, 3 July 2017 (UTC)
- @Nihonjoe: The discussion is in Wikipedia:Administrators'_noticeboard/Incidents#User:Magioladitis_high_speed_editing where Smeat75 writes "My watchlist has been filled with edit notifications with the message "Replace magic links with templates per local RfC and MediaWiki Rfc" and I have no idea what this even means but I have to check every single article to see if something important was changed to the article before that which I have missed. It has been very irritating.". Judging by the edit summary they refer to the Magic links bot and not to my edits. -- Magioladitis (talk) 20:49, 3 July 2017 (UTC)
- Comment from passer-by I understand frustration with the large number of bot edits that are inflicted on watchlists by various bots, but User:Magic links bot is doing necessary work, and its volunteer operator deserves our thanks for the large amount of development work needed. The edit summary is good (example) because it is short yet exactly describes the edit, with links for whoever is curious about what a magic link is, and links to the RfCs. I do not see any attempt to offer a better alternative, and I would oppose adding more verbiage to offer a half-baked explanation of what a magic link is and the fact that they will stop working in due course. Some bot edit summaries might benefit from a link to a bot subpage that describes the details, so the summary would be something like "replace magic links, see explanation" where the last word is a link to the subpage. However, that is not needed in this case. Johnuniq (talk) 04:37, 3 July 2017 (UTC)
- I didn't get a ping, I just happen to watch this page. But I'm not really all that involved in this matter. I simply protected that redirect page to avoid further misunderstanding. I'll watch, and I'll leave a comment here: if people are complaining about bots clogging the watchlist, then set your filters to remove them. I have, and it works just fine for me.—CYBERPOWER (Chat) 11:57, 3 July 2017 (UTC)
- As I, and several others, have stated, the edit summary for the bot is fine. It tells exactly what the bot is doing without going into excessive detail (that's what the links to the RfCs are for). The fact that Hobbes Goodyear wants to prevent someone from operating bots altogether because of a single edit summary (which the majority of users agree is perfectly fine) is laughable at best. SkyWarrior 16:42, 3 July 2017 (UTC)
- It's perfectly fine. It's not too long and has multiple links for people who like reading things. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 17:03, 3 July 2017 (UTC)
- Magic links bot's edit summaries (e.g. here and here) seem perfectly reasonable to me. They describe exactly what is occurring, and link to the definitions and decisions that are driving the edits. These edits are properly being tagged as minor and bot so as to avoid unnecessary recent changes and watchlist impact. Additionally, no edits unrelated to the specific task are being made without explanation. Finally, the bot's user page clearly defines the task with examples. — xaosflux Talk 17:33, 3 July 2017 (UTC)
- I would still be interested in any ways that our magic links bots could be improved - especially with the edit summaries. As such, I have asked 3 times so far for an example of how the summary could be improved: [8] [9] [10]. SQLQuery me! 03:01, 4 July 2017 (UTC)
- @SQL: I don't think that you are. I feel as though I am speaking to a brick wall here. I already told you what I think. If you really want an answer, and are not just being combative, please give it a shot yourself, and let me know. Please assume that I am foolish and lazy enough to want an edit summary that can be understood from the summary itself, without having to scroll through various links purporting to explain the issue. Please give it an honest go, or stop repeating yourself.--Hobbes Goodyear (talk)
- If you read any of the BRFAs, you'd know that the edit summary was something I made a point of working on with the bot operators to make clearer. This was the best we could come up with - hence why I'm asking you for a fifth time now for an example showing how we could do it better. SQLQuery me! 04:06, 6 July 2017 (UTC)
- Please get off your BRFA high-horse, whatever a "BRFA" is. Try again, per my suggestion, or please go away, as you are not convincing me, or apparently even listening to me, and you are otherwise preaching to your (rather small) choir of bot buddies. If you refuse to engage me, please do not waste people's time responding. Here's what I suggest: just say "I can't get through to this guy, no point continuing." You get the last word, I go away, and everybody's happy--except, of course, regular editors who continue to get spammed with inscrutable bot edits which, if they only took the time, they would discover are the result of the sniveling capitulation by certain EN editors to the fiat of some pimply-faced WMF coder who thinks magic words are icky-poo. --Hobbes Goodyear (talk) 06:09, 6 July 2017 (UTC)
- Hobbes Goodyear, if I might try to help - a BRFA is a "Bot Request for Approval", i.e. the process by which an operator requests permission to use their bot for a task. In the Magic links bot BRFA (see the very last thread under "Trial Complete 100 edits") you'll see a discussion regarding how the edit summary should read (there was also a discussion at my own bot regarding edit summaries). Between both bots, and a few other discussions, we worked out how to best display the summary so that it gave enough information. This is the reason why SQL keeps asking for your opinion on how to word it - we've already had the discussion a few times and are (in good faith) confident that we do have the best wording. I don't think any of us are on a high horse, and if you have something better I at least would be happy to consider it. Throwing shade and incivility, though, is just a good way to get everyone else to ignore you. If you feel this environment is too hostile, though, I'm willing to discuss it elsewhere, such as my own talk page. Primefac (talk) 13:17, 6 July 2017 (UTC)
- OK @Hobbes Goodyear: ball is in your court. Please clearly and specifically explain both what you see is the problem, and what you think a solution would be regarding this edit summary. If no response, this can be closed out as a non-issue. — xaosflux Talk 03:48, 4 July 2017 (UTC)
- @Xaosflux: Hobbes Goodyear has not edited since July 2. No idea where Hobbes is located, but yesterday was Independence Day in the United States, and some people take time off around then. I suggest we allow at least another week (until about this time next Wednesday, July 12) before closing if there is no response. That should be enough time even if Hobbes was taking a vacation. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:25, 5 July 2017 (UTC)
- Sounds fine to me. — xaosflux Talk 17:44, 5 July 2017 (UTC)
- I asked a question, and you answered it. The rest is Nihonjoe's creation. I think that further discussion here is a waste of everyone's time, don't you? --Hobbes Goodyear (talk) 03:01, 6 July 2017 (UTC)
- <snort> You're a real piece of work, Hobbes Goodyear. This was all done to try to satisfy your claim that the edit summary was unacceptable. Pretty much everyone who participated in all the various discussions disagreed with you, so now you're trying to make it look like you don't care. @Xaosflux: We can close this. It's obvious Hobbes only wanted the attention and didn't actually have a problem with the edit summary. He just wanted to waste everyone's time, apparently. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 14:38, 6 July 2017 (UTC)
- The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
HTTP→HTTPS
Is it OK for a bot to edit a page multiple times, changing HTTP→HTTPS for external links one by one? It clutters the page history. Your opinions? --XXN, 17:00, 3 July 2017 (UTC)
- @XXN: history clutter isn't a "primary" factor for bots, but there could be an issue. Please provide the name of the bot, where you have asked the operator about it already (or at least invite them to this discussion), and some pages and diffs as examples of your concern. — xaosflux Talk 17:26, 3 July 2017 (UTC)
- Are there cases where someone edited in between the bot edits, adding a new external link that needed to be changed? The bot (Bender the Bot) I've seen doing this seems to be doing things just fine. I'll admit I haven't paid attention to see if it's editing pages multiple times in situations other than the one I suggested. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:08, 3 July 2017 (UTC)
- A recent case which speaks for itself (behaviour discovered only now).
- Previously, thanks to my watchlist, I've noticed only the fact that this bot edits the same page multiple times under this task, replacing HTTP→HTTPS for individual websites one by one. IMO, a list of websites for which to do such replacements should be created first, and the bot should then do all replacements in one edit per page. XXN, 20:24, 3 July 2017 (UTC)
- @XXN: Let's assume the best of intentions in BtB and their operator's behavior. It could be that the bot's targeting system was based on a database extract of page content that did not have both links in it. I notice that the first change does it for the af.reuters.com subdomain, whereas the second does it for the fr.reuters.com subdomain. I note that the changes before and after the second change are also for the fr.reuters.com subdomain. Going to take a leap and assume that the bot is configured to look for specific subdomains in the case of Reuters, as not all are converted over to the HTTPS version. Also, before raising this at a noticeboard, why didn't you raise this with bender235? I don't see any talk page discussion about this, which is expected when working with other Wikipedians. Hasteur (talk) 20:55, 3 July 2017 (UTC)
- Operator notification of this discussion has been left. Bender235 is normally quite responsive to feedback for improvements. — xaosflux Talk 21:03, 3 July 2017 (UTC)
- I have implied in the past that this was happening but I did not want to report it. Glad that this is considered something that needs to be handled. Still, I think we need a page that will say that we need to encourage devs to do more of what they do in a single run. -- Magioladitis (talk) 21:20, 3 July 2017 (UTC)
- I'm the bot operator, and I thought converting one domain at a time was how I should run this HTTP→HTTPS conversion, so it can be more easily reverted if something goes wrong (links don't work all of a sudden, etc.). Usually I convert all subdomains of a particular domain; the Reuters issue was an exception because I wasn't aware that these domains exist.
- Anyhow, should I change the modus operandi of this bot to convert 5-6 domains at a time? --bender235 (talk) 04:51, 4 July 2017 (UTC)
- @Bender235: Primarily - do what you are comfortable with. That being said, if you have already shown that a particular domain conversion is successful (a sufficient number of unreverted updates over a reasonable amount of time) adding them to a "master convert list" for other edits may be a good approach. Keeping the edit summaries useful may require some adjustment, and instead of linking to multiple BRFA you could link to a section on your bot's userpage that groups them. If you want to make this type of change, you do not need a new BRFA. As most of your runs are listed as "one-time run" - do you anticipate actually needing to make these edits again though? — xaosflux Talk 12:10, 4 July 2017 (UTC)
- Most of the domains I converted are HTTPS-by-default as of today, so newly added links should always be HTTPS. I don't think there will be need for a re-run. --bender235 (talk) 12:51, 4 July 2017 (UTC)
- Maybe it's a bit too late to talk about this, but personally I think that all such replacements in a page should be done in a single edit, for multiple reasons, including: to not clutter the page history unnecessarily, to not mask vandalism or unconstructive edits between two or more closely spaced bot edits, and to not unnecessarily disturb users watching a page with repeated similar edits again and again (and please don't recommend disabling the display of bot edits in watchlists). XXN, 11:08, 7 July 2017 (UTC)
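For what it's worth, the single-edit-per-page approach XXN describes and the "master convert list" Xaosflux suggests combine naturally. A minimal sketch, assuming a list of domains already verified to serve the same content over HTTPS (the Reuters subdomains are taken from the diffs above; the list itself is hypothetical):

    import re

    # Domains already verified to work over HTTPS (hypothetical master list).
    CONVERTED_DOMAINS = ['af.reuters.com', 'fr.reuters.com']

    def upgrade_http_links(wikitext):
        # Convert http:// to https:// for every verified domain in one pass,
        # so each page receives at most one edit per run.
        for domain in CONVERTED_DOMAINS:
            pattern = r'http://(?=%s/)' % re.escape(domain)
            wikitext = re.sub(pattern, 'https://', wikitext)
        return wikitext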
Blocking inappropriate use of a bot
User:Kandymotownie recently made sweeping disruptive edits to high-profile pages such as Barack Obama and Donald Trump, via IABot, adding archive links for hundreds of sources which are still live, so this only creates useless bloat in wikitext. I reverted those but there is similar bot-assisted damage to other pages, mainly about Ghana. This user's talk page is full of warnings that s/he never responds to over several years, indicating a WP:NOTHERE attitude. How was s/he ever authorized to run a bot? In view of the recent disruptive actions, bot credentials should be revoked immediately, and perhaps a stern warning or short block is in order. — JFG talk 05:33, 7 July 2017 (UTC)
- @JFG: These edits do not appear to have been made by a bot, but directly by an editor. As such no "bots" are involved here. They do appear to be using a service to help them build their edits, but the actual edit commitments were done manually. There is no "bot credential" involved here to review or revoke. I see you left the editor a talk page message - if this needs further review WP:ANI may be the best venue if there is actually continuing disruption. — xaosflux Talk 10:46, 7 July 2017 (UTC)
- Thanks for the clarification, Xaosflux; indeed it wasn't clear to me that IABot is manually activated and available to anyone. Will just monitor this user account for a while… — JFG talk 12:23, 7 July 2017 (UTC)
- JFG, please refer to Wikipedia:Link rot, where this practice is explicitly encouraged. – Jonesey95 (talk) 14:59, 7 July 2017 (UTC)
- @Jonesey95: Encouraged to fix dead links, yes; to add blanket archives to all live sources, no. That's because the user clicked the checkbox "Add archives to all non-dead references" on the tool page. — JFG talk 15:46, 7 July 2017 (UTC)
- Please read more carefully.
Editors are encouraged to add an archive link as a part of each citation, or at least submit the referenced URL for archiving,[note 1] at the same time that a citation is created or updated.
See also this how-to guide. If you have a problem with these how-to guides, please take up your issue with the guide pages, not with editors who follow the guides in good faith. – Jonesey95 (talk) 16:48, 7 July 2017 (UTC)
- Yeah, adding archive links to all cited URLs is a good thing. We have no idea if or when a URL may suddenly become unavailable. Also, it helps if the cited reference is updated or changed (thereby removing the cited information) because the originally cited version is then preserved. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 22:34, 7 July 2017 (UTC)
- Just as an FYI to all admins on Wikipedia. Admins on Wikipedia are also admins on the IABot Management Interface. If any user is abusing the tool and being disruptive with it, admins can look up the target user on the interface and block them. Consequently, if they are blocked on Wikipedia, they are also blocked on the interface. So there is more than one method to stop tool abuse.—CYBERPOWER (Chat) 17:36, 7 July 2017 (UTC)
I nominated myself for BAG
Hello everyone!!! I just nominated myself for BAG membership. Your participation would be appreciated.
Wikipedia:Bot Approvals Group/nominations/Cyberpower678 3—CYBERPOWER (Message) 23:51, 9 July 2017 (UTC)
Bots Newsletter, July 2017
Greetings! Here is the 4th issue of the Bots Newsletter (formerly the BAG Newsletter). You can subscribe/unsubscribe from future newsletters by adding/removing your name from this list. Highlights for this newsletter include:
BU Rob13 and Cyberpower678 are now members of the BAG (see RfBAG/BU Rob13 and RfBAG/Cyberpower678 3). BU Rob13 and Cyberpower678 are both administrators; the former operates BU RoBOT which does a plethora of tasks, while the latter operates Cyberbot I (which replaces old bots), Cyberbot II (which does many different things), and InternetArchiveBot which combats link rot. Welcome to the BAG!
We currently have 12 open bot requests at Wikipedia:Bots/Requests for approval, and could use your help processing!
Wikimania 2017 is happening in Montreal, during 9–13 August. If you plan to attend, or give a talk, let us know! Thank you! edited by: Headbomb 17:12, 19 July 2017 (UTC) (You can subscribe or unsubscribe from future newsletters by adding or removing your name from this list.)
Wikimania 2017
- I intend to attend. All the best: Rich Farmbrough, 18:43, 19 July 2017 (UTC).
- 90% likely for me.—CYBERPOWER (Message) 02:29, 20 July 2017 (UTC)
- UPDATE: I'm confirmed to attend. Due to costs, I will be attending the main conference days only and possibly present alongside with Internet Archives.—CYBERPOWER (Chat) 13:56, 22 July 2017 (UTC)
- I won't be there, sadly. After they announced the door costs, it all added up to be too much for me. Declining my proposals without bothering to even tell me didn't sit well either. ~ Rob13Talk 18:25, 20 July 2017 (UTC)
- @BU Rob13: Is CopyrightforDummies one of your declined proposals? It's a completed submission for Wikimania 2017 ready to be reviewed by a member of the Programme Committee. No indication that it's been reviewed yet. When do they do that? wbm1058 (talk) 21:16, 20 July 2017 (UTC)
- It's not on the schedule they've already released, so it's evidently declined. I never received any email from them. ~ Rob13Talk 21:43, 20 July 2017 (UTC)
- I wanted to be there in order to organise a debate on bots and how to optimise our work in English Wikipedia, but unfortunately I won't, since the persons I contacted to co-organise this did not seem motivated enough for that. -- Magioladitis (talk) 18:30, 20 July 2017 (UTC)
- My article alerts talk was declined (sadly, [11]), but the one on WP:JCW was accepted ([12]). I think it might be possible to present my Article alerts at the pre-conference or something though (see [13]). Headbomb {t · c · p · b} 20:06, 20 July 2017 (UTC)
- Yes, the registration fees are a big turnoff. Relative lack of familiar names on the scholars list is another turnoff. Despite all that, I'm considering a day trip as a close relative lives within a 2 1⁄2 hour drive. Not sure which would be the best day. Not particularly interested in sitting through public-relations oriented talks given by WMF insiders. I can watch the videos online later (assuming that they're posted). wbm1058 (talk) 21:45, 20 July 2017 (UTC)
- There's always been registration fees though. I can't recall what they were in the past exactly, but the early bird prices seemed comparable to what they have been in the past. I could be misremembering though. Headbomb {t · c · p · b} 15:28, 21 July 2017 (UTC)
- @Headbomb: In 2016, you could get into the entire conference (including preconference, without lunch or dinner) for 35 euros (~$41). You could get into the entire conference with lunch and dinner and accommodations for the entire time for 500 euros (~$582). The regular rate door cost alone with both days of preconference included is $415 this year (no accommodation, lunch included). That's a huge increase over last time around. ~ Rob13Talk 15:46, 21 July 2017 (UTC)
- I think I may be mixing the per-day cost of this year with the total cost of other years. Seems like an utterly crazy increase if that's the case, especially when WMF could easily fund all registrations. Headbomb {t · c · p · b} 16:09, 21 July 2017 (UTC)
- Indeed. I'm a bit bewildered. The most likely explanation is that Montreal is a more expensive city, but if that's the case, why go to Montreal? I know of at least one admin who lives in Montreal and isn't attending because of the door cost. I'm worried for those who made travel plans before the registration costs were announced. Those people might be in a difficult spot if they hadn't budgeted this much for the conference. Luckily, my flight arrangements were with airline miles. I've lost those miles, but at least I'm not out a substantial sum of money. It's horrible PR to have a conference on free knowledge that is more expensive than the combined costs to attending three days each of New York Comic Con, RTX Austin, and PAX East. ~ Rob13Talk 16:16, 21 July 2017 (UTC)
- London 2014 was GBP 50.00 for 3 days, incl. lunch and dinner, excl. accommodation. Navigating your way around the rabbit warren that is the Barbican Centre (and finding that you'd missed half the presentation) was gratis. --Redrose64 🌹 (talk) 22:37, 21 July 2017 (UTC)
Including "general fixes" on a current BRFA
Hello all, I'd like to get some additional feedback on Wikipedia:Bots/Requests for approval/Yobot 55 - as to whether "any genfixes" is appropriate to bundle in here or not. I'm on the fence - this is a very fine technical task that may already be confusing for some editors to determine what occurred - but I'm also generally in support of not wasting edits and making the page better all at once. Please respond at the BRFA. Thank you, — xaosflux Talk 15:53, 23 July 2017 (UTC)
Cydebot appears to be down
Right now, while we're in the middle of a huge list of speedy renaming of categories, Cydebot appears to have stopped working. It would be nice if some other bot could help out. עוד מישהו Od Mishehu 02:58, 24 July 2017 (UTC)
- See later discussion at User talk:Cyde#Please stop the bot!. Thincat (talk) 09:14, 24 July 2017 (UTC)
Update on Magic Links progress
As of this particular moment, the category counts for ISBN, PMID, and RFC magic links are 1102, 1189, and 2067, respectively. Obviously RFC has been deemed enough of a problem that manual oversight is required, but I thought I'd mention it. The remainder of the ISBN/PMID pages are either on transcluded pages (i.e. the huge batch of AFD Log pages), in userspace (which I have agreed to avoid), or odd cases where manual editing will be required. I don't know what MW's timeframe for turning off magic links is/was, but I think we're at the point where en-wiki can turn them off with little to no issue. I'm not sure if that's something for us specifically to do, but I figured an update on the situation would be helpful.
There are new cases popping up (mostly in the article space) daily, so the bots will probably keep running, but the bulk of the work (249k out of 250k pages) is complete. Primefac (talk) 12:39, 23 July 2017 (UTC)
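As an aside, counts like the ones quoted above come from MediaWiki's tracking categories and are easy to re-check. A minimal sketch, assuming pywikibot and the enwiki tracking-category names (verify the exact titles before relying on this):

    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')
    for ident in ('ISBN', 'PMID', 'RFC'):
        # Tracking categories populated for pages that still use magic links.
        cat = pywikibot.Category(site, 'Category:Pages using %s magic links' % ident)
        print(ident, cat.categoryinfo.get('pages', 0))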
- Nice. I've been hacking away at other identifiers, and non-magic-link ISBNs / PMIDs, but help with them would always be good. Those aren't categorized, so it's hard to keep track of the numbers.
- OCLC should be easily doable by the other bots. The structure is pretty much the same as PMID. The search/replace \[?\[?OCLC\]?\]?(:| )?(\d+) → {{OCLC|$2}} has worked well for me. Headbomb {t · c · p · b} 12:53, 23 July 2017 (UTC)
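For other operators, that pattern drops straight into a one-liner; a minimal sketch using Python's re module (the function name is mine):

    import re

    def convert_oclc(wikitext):
        # Rewrite bare or bracketed OCLC identifiers to {{OCLC}} templates,
        # mirroring the search/replace pattern quoted above.
        return re.sub(r'\[?\[?OCLC\]?\]?(:| )?(\d+)', r'{{OCLC|\2}}', wikitext)

    print(convert_oclc('OCLC 4934799'))  # -> {{OCLC|4934799}}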
I think we should exclude from the list all the "Wikipedia:Articles for creation/..." pages. -- Magioladitis (talk) 13:50, 23 July 2017 (UTC)
- What, all 8 of them? Primefac (talk) 13:52, 23 July 2017 (UTC)
- Primefac Yes. And "Wikipedia:Articles for deletion" too. -- Magioladitis (talk) 14:06, 23 July 2017 (UTC)
- Out of curiosity, why? Primefac (talk) 14:06, 23 July 2017 (UTC)
- Primefac Because they reflect old archived contents, not really suitable for searching material from there, and it would be better not to overload these archives with templates. -- Magioladitis (talk) 14:12, 23 July 2017 (UTC)
- That's reasonable. As I said above, there are pages that just won't get changed for one reason or another, and people will just have to deal with it. Primefac (talk) 14:15, 23 July 2017 (UTC)
- True. Nobody really expects these pages to work fine anyway. We have re-purposed many templates in the course of time. But the main argument is still that these pages do not serve as links to other pages. -- Magioladitis (talk) 14:19, 23 July 2017 (UTC)
- FWIW, there are pages with magic links that have not been added to the category yet. That's why you see AFD daily pages, but not the transcluded pages that actually contain the magic links, in the category. That means that there are additional pages that still have not made their way into the category. This is the long-standing Phabricator bug T157670. Legoktm might be able to do something about it for us. – Jonesey95 (talk) 15:48, 23 July 2017 (UTC)
I've proposed adding a bot section to the dashboard. Comments welcomed. Headbomb {t · c · p · b} 17:26, 25 July 2017 (UTC)
Archiving links not dead - good idea?
The new IABot interface tool allows editors to archive all links in an article even when not dead, see Al83tito edit history (example diff w/ 563 links added). Unlike other tools that operate on a single page, this is more like unattended AWB with a queue system giving great powers to editors who enjoy the ability to make massive edit runs with little effort. This feature can be run by any editor on-demand without needing prior consensus or approval.
We should have a discussion because this feature is not being met with complete acceptance. The User talk:Al83tito talk page has example complaints. There is an open RfC at Village Pump for doing this for all articles on Wikipedia via bot. This discussion concerns the IABot interface tool, which does the same thing on-demand. In my opinion, this feature is powerful and apparently disruptive enough that it should have more community discussion. Do we want to have this feature (archiving live links on-demand)? If so, do we want to allow it for mass edits with an AWB-like tool? And if so, do we want an approval mechanism such as AWB, or a bot approval like BOTREQ? Or leave things as they are? @Cyberpower678, Al83tito, JFG, Lordtobi, and Dhtwiki: -- GreenC 14:51, 10 July 2017 (UTC)
- Operator comment: The context is distorted here. Only the single page tool that makes edits directly from the user's account allows for the option to rescue non-dead links, and inside references only. The bot queuing tool has NO options, and uses the bot's default configuration values as defined on its on-wiki configuration page.—CYBERPOWER (Chat) 15:28, 10 July 2017 (UTC)
- Clarification comment: The IABot Management Interface is a permission-based tool. Permissions are auto-acquired based on the user's wiki tenure. After 10 edits and 10 days, users can use the single page analysis tool and queue the bot up to 500 pages per job. They can also report false positives. After 1000 edits and 3 months of activity, users can make alterations to the bot's DB that have an impact on the behavior of IABot, as well as queue 5000 pages per job. Administrators have even more power, by being able to queue up to 50,000 pages, and can change user permissions, block users on the interface, and alter URL data at the domain level. There's the bot user group, which grants high-speed access to the tool's API, and then finally tool roots, like me and GreenC, have access to all of the tools and features. Everything is thoroughly logged, so disruptive users can be easily identified.—CYBERPOWER (Chat) 15:34, 10 July 2017 (UTC)
- IMO the usage of the interface should not be made available to users who just happened to be here for a certain amount of time and a couple edits (e.g. you could register, edit a few pages, wait a few weeks, then use IABMI twice, 500 pages each, and you'd be good to go to spam 5000 pages into it; this makes abuse too easy). As noted below, it should either adopt AWB's permission system, or receive its own in a similar fashion. If neither of these is executed, the current limits should be toned down to extended-confirmed users, and then just 100-or-so pages. Lordtobi (✉) 15:43, 10 July 2017 (UTC)
- And how is summoning the bot to 500 pages any different than the bot eventually showing up on the page on its own? The queue tool has no options, so if you send the bot there, it will behave like it normally does.—CYBERPOWER (Chat) 15:55, 10 July 2017 (UTC)
- @Lordtobi: Just for complete and total clarification, this interface's development was focused completely around abuse mitigation at its core, and considerable development time went into that before I built this interface over the core. Just so you know, there are two different tools that let users rescue links on Wikipedia. The first tool is called the Single Page Analysis Tool. This tool lets users analyze and edit articles right on the spot, and the edits are made from the user's account. In addition to that, they also have the option to change the bot's behavior by clicking a checkbox. This tells the bot to restrict operations to references only and rescue ALL links instead of just the dead ones. If anyone abuses that, it's simply a matter of stopping the user. They can be blocked from using the interface by any admin via two methods. The first one is on the tool interface directly. Users can still log in but will receive a block message when trying to use it. The second option is to block the user directly on Wikipedia, which carries over to the tool. Any blocked user on Wikipedia is blocked from the interface as well. The second tool is the Bot Queuing Tool, which lets users queue the bot up on as many pages as allowed by their permissions and watch the bot run through the list. This tool has NO options, so the bot runs exactly the same way as it usually does, with the exception of the username of the requesting user being linked in the edit summary. This can't be abused if all you can do is tell the bot to come visit a particular page and ask it to do what it usually does. You can't tell it to rescue all LIVE links as well as DEAD, and it's for this reason of abuse that I intentionally left that option out. So I'm not inclined to restrict the permissions.—CYBERPOWER (Chat) 16:12, 10 July 2017 (UTC)
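To make the queue limits quoted above concrete, here is a minimal sketch of the tier logic as described (the function and field names are hypothetical; the thresholds are the ones Cyberpower678 gives):

    def max_pages_per_job(edit_count, account_age_days, is_admin=False):
        # Tiers as described: admins 50,000; 1000 edits and 3 months 5,000;
        # 10 edits and 10 days 500; everyone else cannot queue the bot.
        if is_admin:
            return 50000
        if edit_count >= 1000 and account_age_days >= 90:
            return 5000
        if edit_count >= 10 and account_age_days >= 10:
            return 500
        return 0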
- Generally, I would say that there should be two regulations: Foremost, a permission that hinders users from using it disruptively, or respectively a subsequent exclusion from the tool's usage if abused. I would suggest that it receive the same rights management as AWB, as both have similar features (easy and quick [semi-]automated wiki editing). A second matter would be limiting the tool to just dead link repairs, so that there is no archival of live sources where not needed, and so that it does not take up as much space before necessity. You could offer the possibility of archiving live sources that are expected to die soon (for specific WP Video games examples, 1UP.com was closed in 2012 and GameTrailers' website in 2016; links from both sites started going down last year, and they could have been archived before that happened).
- A specific issue here also comes with Al83tito not taking part in the community: they do not respond to talk page messages, and are seemingly making such disruptive edits deliberately, despite numerous attempts to tell them to cease.
- Now, off-topic [partially]: Often, links are considered dead by bots just because they don't (or no longer) redirect to their latest URL position (such as with massive shift-arounds on IGN, GameSpot, and GameRevolution over the past six years or so), and the bot archives the old version instead of looking for the new one. It would be nice if there was a tool, or a feature for IABot, that manages to update links (also in cases where there no longer is a direct redirect), to prevent link rot from the beginning and never, or only at a later point in time, find it necessary to archive the link. Lordtobi (✉) 15:06, 10 July 2017 (UTC)
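The link-updating behaviour asked for above could start from simple redirect resolution. A minimal sketch, assuming the requests library; real dead-link detection (soft 404s, parked domains, etc.) is considerably more involved:

    import requests

    def refresh_url(url):
        # Follow redirects; return the final URL if the page still resolves,
        # or None if it looks dead (a candidate for an archive link instead).
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
        except requests.RequestException:
            return None
        if resp.status_code == 200:
            return resp.url  # possibly a new location after a site reshuffle
        return None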
- Question Do we want the IABot Tool to give users the option to add archives to good links like this? -- GreenC 17:01, 10 July 2017 (UTC)
- Answer: Yes we do. (((The Quixotic Potato))) (talk) 17:11, 10 July 2017 (UTC)
- The problem is this tool has become a global tool. So it's no longer just in enwiki's discretion.—CYBERPOWER (Chat) 17:05, 10 July 2017 (UTC)
- It can probably detect the language of the target article and abort if not whitelisted. -- GreenC 17:56, 10 July 2017 (UTC)
- I'd really rather not bloat the code in that manner. IABot is already quite complex as it is.—CYBERPOWER (Chat) 18:01, 10 July 2017 (UTC)
- I doubt it would be that difficult, but the least bloat and complexity would be to not have the feature :) -- GreenC 19:54, 10 July 2017 (UTC)
- All links should get archived; I do not see an advantage to limiting ourselves to only dead ones. (((The Quixotic Potato))) (talk) 17:10, 10 July 2017 (UTC)
- Archiving and archive-citing is good: Wikipedia rules state that "Editors are encouraged to add an archive link as a part of each citation" (Wikipedia:Link rot). What is the downside of using an automated tool like IABot to do that work? Actually, IABot and its prolific use will be of great service to Wikipedia. I would like to hear any substantive arguments as to how this is not the case. (talk) user:Al83tito 18:50, 10 July 2017 (UTC)
- Unclear where that phrasing Editors are encouraged originated, from consensus or one user's opinion; probably the latter, since it's a wp:essay. -- GreenC 19:54, 10 July 2017 (UTC)
- That essay includes the phrase "...at the same time that a citation is created or updated", which indicates that it's the editor who knows the article text and what references support it who should be creating the archive link, someone who will audit IABot's sometimes imperfect output, and who knows what links have gone dead because the site has been reorganized (and thus an archive snapshot isn't needed, or that it's time to use the more timely information put on the front page) and when information has been deleted (and hopefully an archive snapshot is available). Nothing there about editors with little involvement curating a page making massive additions where they couldn't possibly be checking the viability or usefulness of the links provided by the bot. Dhtwiki (talk) 23:54, 14 July 2017 (UTC)
- Web-archiving and citing only has upsides. Can someone argue to the contrary? I read above several procedural arguments, but none really substantive, trying to argue for limiting the use of IABot and automated web archiving and citing. But, could someone actually spell out when would it be bad for anyone to do archiving and cite-archiving of existing references in wikipedia?
- @GreenC: stated that "giving great powers to editors who enjoy the ability to make massive edit runs with little effort. This feature can be run by any editor on-demand without needing prior consensus or approval." and specifically pointed to an article I edited using the IABot, which was able to archive and cite-archive 500+ citations almost instantly. Could someone please elaborate how that goes against the Wikipedia rules and Wikipedia's spirit? When would that ever not be a great thing and time-saver in doing something that is actually necessary?
- Archiving and citing the archived url of existing references has zero impact on the actual content/information of the article, while it improves citations by making them more resilient to linkrot, therefore making those citations more durable, and contributing to a more long-lasting article.
- Whether 10 citations are improved at a given time or 500 is, I think, beside the point. In the case of web archiving and citing, does size matter in a negative way? I just don't see why pointing to the edit size, or edit ease, is relevant. Making big edits is not in itself bad. I would like to hear more substantive arguments about the quality, intent, and actual value of the edits themselves... Thank you. (talk) user:Al83tito 17:05, 10 July 2017 (UTC)
- Hi user:Al83tito. Can we agree to differentiate between adding archive links for dead links, which everyone agrees is a good idea, and adding archive links for live links, which is somewhat controversial? This thread concerns the latter. Thanks. -- GreenC 19:54, 10 July 2017 (UTC)
- Downsides and some stats / info:
- There are about 20 million external links on enwiki. About 2 million of those are dead. The rest are live.
- Adding 18 million additional archive links (20 million - 2 million = 18 million live links) creates bloat and complexity in the wikitext, in the end fixing links that are not broken.
- IABot is already able to add archives when a link dies so there is no big gain doing it proactively.
- All live links on Wikipedia are already being saved to Wayback via an automated script.
- Wayback Machine is not a static database, it is dynamic. Wayback archives that work today might not work tomorrow for various reasons. Site owners can request it be blocked, for example at Wayback (different from robots). This then causes problems with bots, which are unable to replace with a working snapshot at archive.is since it's already archived at Wayback in the wikitext (prior to the block). The best results are found by choosing the archive at the time it's needed.
- Adding archive links to every URL creates a large maintenance overhead due to #5. They need occasional checking and resetting (see the sketch below this list). Doing this for 2 million links will take 3-6 months of 24x7 processing by a complex bot. For 20 million it will be nigh impossible due to resource limits.
- Other archive services such as Webcitation and Archive.is may not last. They are underfunded and could disappear anytime. Other services have already disappeared. Fixing those links when it happens will take considerable time and resources the more links there are to deal with.
- -- GreenC 19:54, 10 July 2017 (UTC)
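For readers wondering what the "checking and resetting" in point 6 involves: the Wayback Machine exposes an availability endpoint that reports the closest working snapshot for a URL. A minimal sketch (the endpoint is real; error handling is simplified):

    import requests

    def closest_snapshot(url):
        # Ask the Wayback Machine availability API for a working snapshot.
        resp = requests.get('https://archive.org/wayback/available',
                            params={'url': url}, timeout=10)
        closest = resp.json().get('archived_snapshots', {}).get('closest')
        return closest['url'] if closest and closest.get('available') else None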
- The fact that a link is currently live does not mean it will always be live, which is why we use archives. In order to archive a link it NEEDS to be live... I am quite curious where you got those numbers from. What was the methodology used? For example: how does the spider know if a page has changed? Or if it is a 404 page?
- The best way to reduce the amount of characters required to link to an archived version of a page is probably to start a link shortening service that can be used for archive links. Something about babies and bathwater.
- The Wayback machine does not index the entire internet. Asking the Wayback machine to index (and even prioritize) the links we want indexed is a very good idea.
- Please post a link and the current status. Is it at 100%?
- We all wish there was something better than the Wayback machine. Webcitation is also far from perfect.
- I don't think this maintenance overhead will be significant. Why would it be? We can ask archive.org to let us know when they remove a page we have linked to, right? That way a bot can run every day to delete the links that archive.org has deleted the day before.
- Deleting archive.org links if and when the wayback machine goes down permanently is totally do-able. And webcitation and archive.is are used far less frequently.
- (((The Quixotic Potato))) (talk) 20:04, 10 July 2017 (UTC)
- I've been doing it almost full-time for the past 18 months. My bot WaybackMedic has over 20,000 lines of code and runs 24x7 on the IABot database, as well as in the wikitext; you can see diffs here. I know how long it takes, what's involved, and the scope of the problems. The numbers I quoted are roughly accurate. -- GreenC 20:55, 10 July 2017 (UTC)
- Thank you. I'll take a look. (((The Quixotic Potato))) (talk) 21:06, 10 July 2017 (UTC)
- TQP's #3 seems a compelling reason to attempt to add archive snapshots when creating references. Absent any other feedback, Wayback could register what links are wanted. But it should also be able to learn that by examining URLs in a Wikipedia data dump and get a more comprehensive view without all the trouble of people or bots proactively linking to its site. Dhtwiki (talk) 00:05, 15 July 2017 (UTC)
- After doing some research I think I understand @GreenC: better. I agree with Dhtwiki that it would be a good idea to use regex to make a list of URLs each month, based on the latest data dump, and ask the Wayback Machine to archive those URLs but GreenC wrote: "All live links on Wikipedia are already being saved to Wayback via an automated script.". I am not sure which script that is. Ideally archive.org would tell us which of the URLs we use they've removed from their archives. (((The Quixotic Potato))) (talk) 11:18, 15 July 2017 (UTC)
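A rough sketch of that dump-scan idea, for the record; the Save Page Now endpoint is real, but the URL extraction is heavily simplified and rate limiting is left out:

    import re
    import requests

    URL_RE = re.compile(r'https?://[^\s<>"\]|}]+')

    def archive_urls_from_dump(dump_text):
        # Extract external links from dump text and ask Wayback to save each.
        for url in sorted(set(URL_RE.findall(dump_text))):
            requests.get('https://web.archive.org/save/' + url, timeout=30)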
- This is a bad idea.
- Freeze the tool NOW until this is resolved.
- Stop archiving non-dead URLs.
- Think seriously about why we have so many auto-fuckup-bots running against clear policy. CITEVAR is another one that seems to be imposed against humans, but 'bots get a free ride against it.
- Andy Dingley (talk) 19:56, 29 July 2017 (UTC)
- User_talk:Mramoeba#Archiving_non-dead_links "I fully intend to archive links using wayback when editing Wikipedia from now on. " Oh deep joy 8-( Andy Dingley (talk) 20:48, 29 July 2017 (UTC)
- Hey, what is the problem? From now on I intend to use the manual archive parameter in cite web to create an archived link to the reference, to proactively prevent link rot. I literally cannot see what the problem is here. With respect, I've asked you several times now to explain if I am misunderstanding or if there is an issue with doing that, but so far, apart from potentially confusing readers, I haven't been offered any reason not to. Incidentally, when you use the parameter in the template the reference is more confusing to a casual reader; if the archive bot does it, the archive follows the original in a logical order. But that's just a minor observation. Mramoeba (talk) 11:48, 30 July 2017 (UTC)
@Andy Dingley: Which bots get a free ride against WP:CITEVAR? Headbomb {t · c · p · b} 01:20, 30 July 2017 (UTC)
- AWB is a regular. Whitespace in the layout of wikicode (a boon to those editing pages, and invisible to readers) gets stripped. Andy Dingley (talk) 01:31, 30 July 2017 (UTC)
- AWB is not a bot though. It is used by many bots, but it does not mess with WP:CITEVAR. Headbomb {t · c · p · b} 01:33, 30 July 2017 (UTC)
- Sorry for my delayed follow-up; other matters required my attention, and I will be off-line for some time again after this post. However, I wanted to at least acknowledge some of the replies before I go back into a long silence.
- @GreenC:, I have read your response; I would like to respond to it more fully when I find more time later on. I appreciate your thoughtful points, and while I would like to share a different perspective on some of them, I will need to wait until later to do that properly.
- However, what I did not want to postpone was to thank you for having created WaybackMedic, as well as for the other initiatives to archive all live links shortly after they are cited in any Wikipedia article (I was unaware of them until you mentioned them here). That is a very good contribution to the Wikipedia project and community, and I very much thank you for all the excellent work you have put into it.
- GreenC, you also have special insight into how much time or computational power it requires to archive (your point #6 above), or to replace defunct archiving resources (your point #7 above). I respect your insights into resource demands and constraints, and I would like you to please shed some more light on them. You said that 2 million links will take 3-6 months of 24x7 processing by a complex bot. For 20 million will be nigh impossible due to resource limits. Where is the processing done? And how much processing power is being used for it? Is it done at the Wikimedia Foundation servers, or is it performed by a computer of yours that is dedicated to it? I would like to understand what is the bottleneck in the process, to then understand better if something could be done now or in the future to ease those constraints, and how I and others in the community could support that effort.
- I appreciated many points made above on reasons for and against citing archived links (and using bots to speed up the process), but I still believe that even if the archiving & archive-citing solutions are imperfect, it is still better to cite archived links than not, even when the links are still live. I understand that just stating a belief is insufficient, and I aim to elaborate further on my arguments at a later time. Thank you. (talk) user:Al83tito 01:39, 30 July 2017 (UTC)
- I'd be happy to post a detailed reply on technical issues about my program, but you will need to trust that the bottleneck is largely outside our control. However, it shouldn't be the central issue. Some editors want to have every link archived, others don't. There are lots of reasons for this either way. There should be a discussion about it because stuff like this is happening every day; there is constant back-and-forth churn. (@JFG: since I linked your diff) -- GreenC 03:57, 30 July 2017 (UTC)
We need a centralized debate to define a community guideline about archiving of live sources. However, a number of bot-assisted edits may be due to confusion by users clicking the only checkbox on the IABot page, which says "Add archives to all non-dead references". I have requested a UI improvement at User talk:cyberpower678#IABot: suggest UI clarification. — JFG talk 07:37, 30 July 2017 (UTC)
Bad idea. Have any of you "archive all links" enthusiasts considered whether the Wayback Machine would be able to handle the increased traffic if you replaced 20 million links with archived links? It's been timing out for hours now. A much better way to handle a dead link would be to have a bot ping the two most recent editors on the page with a "dead link" notice, then check back a week or two later to see whether the problem has been rectified (I have no idea whether that's technically feasible). Often the reason for the dead link is that the publisher moved the article, and a search for author and title will provide the new active link. Follow-up on Wayback Machine: Just got this from one of those bot-generated archived links. Space4Time3Continuum2x (talk) 19:54, 30 July 2017 (UTC)
- Yes, I do, and their servers can handle it just fine. I wouldn't be providing it as an option if they couldn't handle it. I'm in constant communication with IA, including the server admins. It's been timing out because it was taken down for maintenance. So what's wrong with the link you just posted? Seems to work for me.—CYBERPOWER (Around) 20:41, 30 July 2017 (UTC)
- While I was posting, I noticed that the search tab changed to the Wayback Machine, but it had returned a 404 message. Generally speaking, Wiki editor isn't the most comfortable editing tool around, and making editors scroll more by increasing the length of links adds to the difficulty of editing. Also, if it's a page on current events with several changes every day, including links being removed and added, you then get a hodgepodge of archived and unarchived links, and both can be either live or dead, as I just discovered. For example, after this bot-generated edit I couldn't open the archived links and reverted the entire edit - I don't remember if the connections timed out or if there was another error message. After reverting, I spot-checked a few links I had added myself during the preceding weeks and discovered that the bot hadn't found the one broken link among the ones I spot-checked - path had changed, easy fix for human editor. [1] The bot-archived link only returns a 404, BTW. I tried to look up the article on the Wayback Machine and discovered that, while www.newyorker.com was crawled numerous times, everything archived since some time in 2016 comes back with 404.
- If the bot can't tell whether the archived link is working or not, replacing working links (and broken ones that can be fixed, for that matter) with archived error messages is counterproductive. Space4Time3Continuum2x (talk) 14:20, 31 July 2017 (UTC)
- That archive you pointed to is working fine. It loads an article about Ivanka Trump.—CYBERPOWER (Chat) 17:04, 31 July 2017 (UTC)
- I can see the article for about half a second, and then I get {"statusCode":404,"error":"Not Found"} - tried Chrome and Firefox on two systems/two different providers. Space4Time3Continuum2x (talk) 18:07, 1 August 2017 (UTC)
Good idea. I think a major argument against is the bloating of the code. I have been editing Wikipedia for about 7 months. When I first looked at the code I couldn't believe how messy it was; in my humble opinion it is horrible. I don't think adding archive links will make it appear any more bloated. To reduce the bloating I have discovered 2 templates that I now use for citations. List Defined References moves all of the citation code to the bottom of the page, so at the citation point all you do is call the citation name using Template:R. The wiki text would be tidy and readable and not susceptible to bloating. All editors need to be educated to use this template. The remaining issues are out of my level of understanding and I'll leave them for others to discuss. Every link I cite is archived. It will be a hell of a job to go back and recover archived urls for each citation I have created once the link has died. 8==8 Boneso (talk) 04:33, 31 July 2017 (UTC)
- Very good idea. Linkrot is growing. Just as Wikipedia grew and its sourcing improved, so will linkrot catch up with it. Domain names expire, content can be changed, hosting gets taken down, sites get reformatted, etc. Yes, we generally cite more reliable sources that last longer, but this only delays the problem. Having worked on linkrot and archiving first-hand, I cannot stress enough how important preemptive archiving can be. Once a link goes dead, there is no guarantee it was ever archived by IA or anyone. At this time, I support archiving every link and asking archivers to prioritize sites that are actually used by the encyclopedia. IA is a library and we should deposit as many used links as we can before they are lost. The inconveniences of markup bloat, server costs, edits, etc. pale in comparison to losing access to sources that are the staple of WP:V. We have bots to periodically check and recheck it. I only wish there were a more compact or centralized way to archive on Wikipedias. I'm afraid we'll have to do it manually and keep hoping we eventually get the sort of interwiki-to-wikidata transition. — HELLKNOWZ ▎TALK 10:16, 31 July 2017 (UTC)
References
- ^ Tolentino, Jia (November 29, 2016). "Ivanka Trump's Terrible Book Helps Explain the Trump-Family Ethos". The New Yorker. Archived from the original on March 7, 2017.
Problem with recent bot edits
Please take a look at WP:TH#Dead WSJ links and the VPT thread linked from it. We are getting complaints about bot edits made this past March by Bender the Bot 8. DES (talk)DESiegel Contribs 03:20, 27 July 2017 (UTC)
- Paging bender235. — xaosflux Talk 03:27, 27 July 2017 (UTC)
- What about InternetArchiveBot? 175.193.247.64 (talk) 04:01, 27 July 2017 (UTC)
- What about it?—CYBERPOWER (Chat) 14:14, 27 July 2017 (UTC)
See WP:TH#Dead WSJ links and this edit where removing the s apparently changed an effectively dead link to a live and correct one. Is this something temporary at wsj.com, or are we going to have to get a bot to undo these changes from March? DES (talk)DESiegel Contribs 02:41, 27 July 2017 (UTC)
- You or they need to get a bot to undo Bender's edits, since there are WAY too many. 175.193.247.64 (talk) 03:35, 27 July 2017 (UTC)
- I'm aware of the problem, see my talk page. There's a better fix than reverting to HTTP. --bender235 (talk) 05:10, 27 July 2017 (UTC)
- @Bender235: Can you supply a list of all pages your bot edited that may be affected by this? If so, I can fix this by bot or semi-auto with AWB. ~ Rob13Talk 05:17, 27 July 2017 (UTC)
- You'd have to check the contribution history. But in any case, I could fix it myself. --bender235 (talk) 14:32, 27 July 2017 (UTC)
- Anything else? Did you find any bots to revert Bender's edits yet? 74.91.2.239 (talk) 23:00, 28 July 2017 (UTC)
@Bender235: and others: While the debate over which sort of link to use is a valid one, any discussion of this matter should note Wikipedia:Sockpuppet investigations/Nate Speed. – Train2104 (t • c) 00:38, 2 August 2017 (UTC)
- I'm not sure what this has to do with the WSJ links. Please explain. --bender235 (talk) 06:05, 2 August 2017 (UTC)
- One of the user in question's editing patterns is submitting editreqs to 'fix' such links. See the last few incidents in the archive. – Train2104 (t • c) 13:43, 2 August 2017 (UTC)
This has just been created. Feel free to be WP:BOLD and add missing terms which you feel would be useful. Headbomb {t · c · p · b} 14:10, 7 August 2017 (UTC)
Behavior of MusikBot
Ponyo protected the page Kitni Girhain Baaki Hain because of sockpuppetry, but the protection template was removed by MusikBot, which said that it is an unprotected page. SahabAliwadia 11:12, 19 August 2017 (UTC)
- @SahabAliwadia: it's because the protection ended. See logs. --Edgars2007 (talk/contribs) 13:25, 19 August 2017 (UTC)
- @Edgars2007: Thank you for telling me; I thought that the sysop had protected it indefinitely. SahabAliwadia 13:27, 19 August 2017 (UTC)
Behavior of Xqbot
As soon as a page is moved, Xqbot fixes all the resulting double redirects immediately. Also, the links to the fixed target pages are shown with the prefix "en:" in the edit summaries. I don't like this behavior, because it can lead to serious errors when there is page-move vandalism. The bot should return to its old behavior. GeoffreyT2000 (talk, contribs) 23:49, 11 August 2017 (UTC)
- GeoffreyT2000, what was the "old" behaviour? I was under the impression this was how it always worked (minus the wikilink change that appears to happen somewhere around 16 June 2017). Primefac (talk) 00:12, 12 August 2017 (UTC)
- Ping to Xqt out of courtesy. Primefac (talk) 00:13, 12 August 2017 (UTC)
- I would prefer that double redirect repair, especially of cross-namespace moves, be delayed (a 1-2x daily task perhaps?). Not too uncommon to find confused users moving their drafts in circles (user subpage, user rootpage w/ article title, project space, mainspace...). If I catch it in time I can move it back without leaving a trace by retracing their steps with suppressredirect. If I don't, I can't overwrite those redirects, and need to do page swaps and speedy tags. – Train2104 (t • c) 15:54, 13 August 2017 (UTC)
- Some delay would likely be good. Personally, I think an hour before repairing would be enough in mainspace, and 24 hours in non-mainspace, but I'll defer to the bot op on what's best. Headbomb {t · c · p · b} 11:20, 19 August 2017 (UTC)
- Xqbot uses the move log to find double redirects. New users cannot move pages around, thus there is no real risk of vandalism. There are also some sanity checks against potential vandalism; the code is part of the mw:Pywikibot framework and public. Anyway, I introduced some delay in my local repository's generator. Hope this helps. @xqt 11:06, 26 August 2017 (UTC)
Bots without apparent benefit
I am wondering if we have any policy, rules, or consensus on what to do with a bot where a) the bot isn't used by anyone but its operator and b) the operator hasn't edited Wikipedia for anything but the creation of this bot. Basically, a bot which is at best for the convenience of one reader, and at worst not used at all, but still editing every day.
Specifically, I am concerned about the recently approved User:Wiki Feed Bot, operated by User:Fako85. It makes 16 edits a day, to its own space, to subpages of Fako85, and to User:DNNSRNST, an editor with one edit (setting up his talk page for this bot, which wasn't even approved at the time). Fako85 has made no edits unrelated to this bot.
The value of having such a bot seems minimal, and I'm not sure that this value is sufficient to outweigh the potential risks (strain on servers? bot account hacking?). Fram (talk) 07:55, 8 September 2017 (UTC)
- @Fram: The project's vision is described on the bot's page, User:Wiki_Feed_Bot. I think the merit of this project should be judged on what it wants to achieve and not on the current usage. I develop this together with @EdSaperia:. He has made many more edits than I did. He also organised Wikimania 2014 in London. I've flown to San Francisco from Amsterdam at my own cost, just to attend the Wikipedia dev summit and learn more about the culture. I'm a professional developer, but made many improvements to fit the Wikipedia platform better. Things that were not obvious from reading the docs. Demanding that developers also edit a lot is too restrictive in my view, but I wonder what the consensus about that is in the community. We're volunteers and we do our best to make it the best we can. We believe in this bot and that it can be useful for editors and readers alike. Many people at the summit liked the idea and we have some enthusiasts in our personal network. We're planning to get more people to use it and will develop the bot further in an agile manner. We'll need permission to edit the user space if we want to test how our bot adds to Wikipedia. If we needed many people to use it before we got the rights, we would be in a catch-22. Fako85 (talk) 08:30, 8 September 2017 (UTC)
- You created the bot request in January. This is supposedly for some project. Where is this project discussed? What progress is being made on it? This is all very obscure, and unlikely to reach many people or benefit enwiki in this manner. (I also can't find any evidence of your improvements made as a developer, but in the end these don't really matter in this discussion anyway.) Fram (talk) 08:49, 8 September 2017 (UTC)
- Practice is to let WMF folks worry about performance, and hacked bots can be blocked if that ever happens. If the bot causes an issue with performance, WMF people will contact the operator, or block it if the operator isn't responsive. 16 edits a day, however, is nowhere near enough to even show on the radar. Pre-emptively denying bots because they might theoretically operate outside of their terms of approval is counterproductive when there's no evidence to suggest this will actually happen. Headbomb {t · c · p · b} 11:25, 8 September 2017 (UTC)
- If there were thousands and thousands of such bots that are low-resource-usage and of infinitesimal utility, which collectively used up a significant amount of server resources, it would IMO warrant demanding some sort of "bang for the buck" from bot operators. But to my knowledge that is not the case; and even if it were, it would require a change of policies before applying them to that particular bot (which does not seem to have any unusual risks).
- One might argue that Wikipedia is not a code repository or a test server (WP:NOTWEBHOST), and that therefore "convenience bots" should be banned regardless of used resources. But I would argue that the "web hosting" guideline is in place to prevent blatant abuse, and should be restricted to clear-cut cases (someone uploading their vacation photos, a bot mining Bitcoin for its operator...). Otherwise, it sounds like a decision for the WMF, not the community. TigraanClick here to contact me 11:47, 8 September 2017 (UTC)
- Fine, it just seems strange that we let people use enwiki as some kind of personal playground, and spend all that time on a BRFA for something without a real use for it. There is no evidence for most of the claims about some project working on this and no obvious way for people here to get involved with it, so it seems to me there is little or no reason to let it continue. Oh, and the number of edits isn't the problem as such, it's the potential resources they use by reading the recent changes log extensively each time. I have no idea how heavy such a read is. But WMF will rarely contact a bot operator here, approved bots are somewhat expected to be a burden on servers; but what when a bot does create such a burden, but for no real benefit? The WMF won't know it, and we don't care apparently... Fram (talk) 11:58, 8 September 2017 (UTC)
- Does anyone have any data about how heavy [a read of a large portion of the recent changes log] is? That is actually a point that I did not check when calling this a low-resource-usage bot. TigraanClick here to contact me 12:54, 8 September 2017 (UTC)
- Not commenting about future policies, but I have the impression that the above case is not violating WP:NOTWEBHOST since they're user pages related to Wikipedia, without promotional links, which are also not fake or pov-fork articles. —PaleoNeonate – 12:28, 8 September 2017 (UTC)
- What is outlined at User:Wiki Feed Bot is far from a "personal playground". The project is in development. If it amounts to something, great. If not, no harm was done. I can't see any reason to stop this bot, or halt development. Headbomb {t · c · p · b} 12:36, 8 September 2017 (UTC)
- What is outlined, perhaps. But none of this can be found in reality. This was done in February 2016 at Wikitech[14], and then abandoned there in May: Fako85 doesn't seem to have done anything else there. Then in January 2017 this came here, to get approved months later after quite a few problems (wrt respecting our policies), and then ... nothing again. The "project" is the bot. There is no indication that anything is still "in development" at all. And it is not as if it is ready; look at User:Fako85/feed/breaking news central europe, which at least has as its first article something related to Central Europe (but no breaking news in it), and then the Index of Hawaii-related articles (???), Hurricane Irma, ... This thing is very far from being useful, but not really in development either. Fram (talk) 12:52, 8 September 2017 (UTC)
- This is a single response to many things above. @Tigraan: once a day it goes over all edits from the day before. I talked to ops about this and at that time they didn't raise objections. Let me know if this has changed. I can see many possible performance improvements, but we'd prefer to work on better modules and a bigger user group first. @Fram: I think that judging based on one day's result is anecdotal. It's a score-based filter system, and if nothing is there it shows random things. It's a matter of putting in a threshold for the score to prevent output if no breaking news occurred that day. Breaking news is based on this: [15], but the bot shows something instead of nothing if there are no breaking-news clusters. Project progress is tracked here: [16]. The last development was at the end of July, as you can see in the git history. One of my favourite quotes: "we're going slow, because we are going far". Arguing that a project is bad and should stop because it develops slowly would be an argument to stop Wikipedia, as new developments tend to take very long (understandably, in my opinion). In general, I don't understand what you have against this project. The bot can only edit the user space, and only if people place the template themselves. It would never pollute anything; that was one of the changes I had to make for the BAG approval. It is an opt-in thing. @Headbomb: thanks for your support. Fako85 (talk) 14:30, 8 September 2017 (UTC)
- "I think that judging based on one day result is anecdotal." True. I judged it over many days, and gave one as an example. Other days seem to be similar or if possible worse (e.g. here). It looks as if you have put your one-man project in production here way too soon. "the bot shows something instead of nothing if there are no breaking news clusters" is just weird. Fram (talk) 14:50, 8 September 2017 (UTC)
- Fram, this is getting close to WP:DEADHORSE territory. Headbomb {t · c · p · b} 15:25, 8 September 2017 (UTC)
- I respectfully dissent, Headbomb. If reading all changes from the previous day causes nontrivial server load, then Fram's continued questioning is very valid: this bot's value, although positive, is minimal, and whether this project looks like it is going to go somewhere is then a relevant question to ask.
- If the load on servers is insignificant, then yes, I would say to leave the bot's creator alone, let them do their stuff and see what happens. But I do not think a WP:DEADHORSE invocation is justified; it would be more of a WP:BADGER, and even that I fail to see. Or am I missing a key part of context? TigraanClick here to contact me 16:42, 8 September 2017 (UTC)
@Fram: The BOTREQ contains the text: "Currently Wiki Feed does not use the RCStream. We're considering it, but we need some time to implement this as it requires a fair amount of changes to the system.". Maybe it is wise to ask Fako to switch to EventStreams? (((The Quixotic Potato))) (talk) 19:40, 8 September 2017 (UTC)
If I understand the BOTREQ correctly (specifically the edit dated 12:42, 22 July 2017), then the bot will have to check every 24 hours whether all the images it is using are still usable. Imagine if a lot of people use this bot; that would mean a massive number of requests, right? (((The Quixotic Potato))) (talk) 20:06, 8 September 2017 (UTC)
- @The Quixotic Potato: This is moot unless that happens and the devs tell us there is a server load issue. It's highly unlikely this will become an issue. ~ Rob13Talk 02:07, 18 September 2017 (UTC)
For those who have missed it this week, something bot-related. Headbomb {t · c · p · b} 02:52, 11 September 2017 (UTC)
HTTPS links: comprehensive source of domains to convert?
I've spotted User:Bender the Bot, User:KolbertBot and maybe others, helpfully converting HTTP links to HTTPS where sites have begun supporting encrypted connections since links were added to articles. It looks as if this is being done a few websites at a time based on prevalence of links to each site and ease of conversion (obviously much easier all round if http://example.com/page corresponds exactly to https://example.com/page without needing to further amend the URL). Has anyone considered using the rulesets established for HTTPS Everywhere to find many, many more sites that can have link conversion applied, including lots of obscure 'long tail' ones that are never going to get noticed by the bot operators? These rulesets are well tested because they are in daily use by HTTPS Everywhere's userbase, so there shouldn't be too many problems encountered where links are broken by the change, even if relatively complex regular expressions have to be applied rather than straightforwardly adding an 's'. See https://www.eff.org/https-everywhere/atlas/ for a list and https://www.eff.org/https-everywhere/rulesets for more info. If this is too complicated, would it be worth instead (or for starters) plundering the resource that is Chrome's HSTS preload list? Each of the sites on it has committed to serving web content through HTTPS only for the long haul, generally redirecting http:// URLs themselves (but thwarted if someone is intercepting traffic on a user's first visit, hence the need for a preload list shipped with the browser), and may have been considered a high-value target for surveillance/man-in-the-middle by the maintainers of the list. Either way, relevant work is being done in this area by outside parties that bot operators here could piggyback on. Beorhtwulf (talk) 16:27, 18 September 2017 (UTC)
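To make the mechanics concrete, here is a minimal Python sketch of applying one HTTPS Everywhere-style rewrite rule to a URL. The rule shown is an invented example, not an entry from a real ruleset; real rulesets are XML files whose rules use $1-style replacements, so a bot would first have to translate them into its own regex engine's syntax:
import re

# Hypothetical ruleset entry: a "from" pattern and a "to" replacement (illustrative only).
RULES = [
    (re.compile(r"^http://(www\.)?example\.com/"), r"https://\1example.com/"),
]

def upgrade_url(url):
    """Rewrite a URL to HTTPS if a ruleset entry matches; otherwise return it unchanged."""
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url

print(upgrade_url("http://www.example.com/page"))  # -> https://www.example.com/page
A bot run would apply such a function to every external link it finds in an article's wikitext, and the HSTS preload list could be used the same way as a simple domain whitelist for the plain add-an-s case.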
Proposal for new magic word for short descriptions
This discussion was spread across multiple pages, including some village pumps, but organically ended up at a dedicated page. However, we are now at (or beyond?) the stage where this again needs wide publicity and participation. Please see the long discussion at Wikipedia talk:Wikidata/2017 State of affairs#Strategies for improving the descriptions and especially the "Proposal from WMF" subsection (currently the bottom one), where a new magic word is proposed and specific implementations of it discussed.
This is a discussion which will potentially impact all articles and affect the first thing all users get to see in mobile view and on apps (and elsewhere), so getting enough input on this is important. I have posted it at WP:CENT, WP:VPPR and WP:AN. Feel free to drop notes at other places.
I have included this noticeboard because, if this proposal or something similar gets accepted, there probably will need to be one or two big bot runs (and perhaps some clever bot programming) across many or all articles.
Please keep the actual discussion in one place if possible. Fram (talk) 07:07, 29 September 2017 (UTC)
Hi, I think this got approved too fast, and under a misconception. The BRFA was open for a grand total of 16 minutes, and was closed by a BAG member whose bot's (for lack of a better word) errors are the subject of the task. It was clarified that these are actually supposed to be medium priority lint errors, not high priority like the BRFA states. Pings: @Cyberpower678 and Nihlus:. Legoktm (talk) 07:46, 1 October 2017 (UTC)
- I think there is a disproportionate amount of disruption being caused by this pre-emptive attempt to fix possible future minor errors in talk-page signatures, all of which date back over a year. People's watchlists are being alerted to these fixes, and folks are then spending time checking them out. Better in this case would have been to allow the potential errors to stand and, if they did crop up in the future, to fix them then. SilkTork ✔Tea time 10:39, 1 October 2017 (UTC)
- When I approved, I was under the impression that this was a high-priority fix. I also didn't bother to leave the BRFA open for long since it was a straightforward task. It was registering as high priority on Special:LintErrors.—CYBERPOWER (Chat) 14:57, 1 October 2017 (UTC)
- In addition, the watchlist argument is a moot point. As I stated on my talk page: "bot edits can be hidden from view within the watchlist, and those that do not want to hide bot edits can learn what it is doing with one look and ignore the rest". – Nihlus (talk) 15:50, 1 October 2017 (UTC)
- Just a heads-up, Nihlus - users have been banned for making too many trivial edits. Or, rather, for ignoring the community's concerns regarding making those trivial edits. Perhaps the most notable is Betacommand: Wikipedia:Arbitration/Requests/Case/Betacommand 3. Arguments that users can turn off their watchlists, or just need to check a bot's edits the once, are not really seeing this issue from the perspective of the community. You have nothing to be concerned about in regards to this incident, but it is as well to pay attention to concerns from the community. SilkTork ✔Tea time 18:19, 1 October 2017 (UTC)
- @SilkTork: That is an extremely inappropriate comparison, especially when you consider a part of it can be considered an aspersion. Also, I am not ignoring the community; I am merely unconvinced by the arguments you presented. – Nihlus (talk) 18:52, 1 October 2017 (UTC)
- Shouldn't interpret it that way. ST is exactly right, and even stressed "You have nothing to be concerned about in regards to this incident". — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:31, 4 October 2017 (UTC)
- Eh, he's made no attempt to clarify, so if I was interpreting it wrongly, I assume he would have said something. Also, I don't take kindly to vague historical lessons about the misdeeds of others as if I am supposed to learn from them, especially when nothing wrong was done by myself (and especially especially if the user is mentioning blocks right after I expressed disagreement with their assessment). Nihlus 14:48, 4 October 2017 (UTC)
Archiving live links - Redux
I knew I had seen a discussion about this before: it is at Wikipedia:Bots/Noticeboard/Archive 11#Archiving links not dead - good idea? Most of the discussants there (obviously mostly fans of bots) seemed to approve of archiving all the reference links in an article, even the live ones. Some of us less technically oriented editors think the practice can be damaging to articles. Recent example, which is the reason I am bringing it up: With this recent edit to the article Barack Obama, the IABot v1.5.2 archived 392 references, adding 74,894 bytes to the article, and increasing its already huge size by 22.6%, from 330,241 to 405,135 bytes. Is that really something that people here think is a good outcome? (The other editor reverted at my request.) Does the bot offer the option of archiving only the dead links, as some of us non-techie people have requested? --MelanieN (talk) 18:04, 17 September 2017 (UTC)
- Actually, only rescuing dead links is the default behavior. The behavior you linked in the diff has to be requested by the user by checking a checkbox. The checkbox option clearly states that it's optional. As the tool interface's ToS states, edits made on behalf of the user are the responsibility of the user.—CYBERPOWER (Around) 19:04, 17 September 2017 (UTC)
- Thanks for the information. Is there a more appropriate place to discuss whether people should choose that option or not? --MelanieN (talk) 00:39, 18 September 2017 (UTC)
- @MelanieN: I am not certain. The best place for this is maybe here or WT:LINKROT.—CYBERPOWER (Chat) 16:37, 20 September 2017 (UTC)
- If you start a discussion at LINKROT, please post a reminder here. I thought that this page's archived discussion, referenced above, showed considerable support for not adding links to archives unless the original URLs have died. I, too, have been encountering continuing massive, useless additions of such links. Where it adds little text to a small article, I don't revert. However, when links are added in the hundreds, with increased byte counts in the high five figures, to an article with high readership, such as the example given above, I do my best to combat it. At Barack Obama, 3/4 of the citations were given useless added cruft at a cost of 75 kbytes. This has got to stop. Dhtwiki (talk) 23:22, 21 September 2017 (UTC)
@Dhtwiki and Cyberpower678: I posted a question about this at Wikipedia talk:Link rot#Using a tool to archive live links. (Although given that project's enthusiasm about archiving, I wonder if that was kind of like asking an insurance salesman if I need more insurance!) --MelanieN (talk) 15:12, 4 October 2017 (UTC)
- LOL :D—CYBERPOWER (Chat) 15:14, 4 October 2017 (UTC)
- On second thought, maybe I should have taken it to a more general and more visible forum such as Village Pump? --MelanieN (talk) 15:19, 4 October 2017 (UTC)
- Talked myself into it. The discussion is now at Wikipedia:Village pump (miscellaneous)/Archive 56#Using a tool to archive live links. --MelanieN (talk) 15:39, 4 October 2017 (UTC)
Discussion related to double redirect-fixing bots
There is a discussion at Wikipedia talk:Double redirects#The bots should operate with a delay where the input of bot operators, particularly those who operate bots which fix double redirects, would be useful. Note that the section contains multiple ideas (not just the one in the section title), but not yet any firm proposals. Thryduulf (talk) 16:44, 9 October 2017 (UTC)
AutoEd and trivial changes, at least one of which is against guidelines
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Please see Wikipedia talk:AutoEd#Two bad edit types: changing spaced en dashes to unspaced em dashes, against MOS:DASH; changing page and other numeric ranges like 1901–1911 to 1901–11, which is against the spirit if not the letter of MOS:NUM. The fact that the rule to use full numbers was only applied to the "Dates" section at MOSNUM is an oversight, which has been fixed (I expect the fix to stick, because all the reasoning about date ranges also applies to other ranges). The rest of what this tool is doing needs closer examination under the style guidelines and WP:COSMETICBOT. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 23:39, 10 October 2017 (UTC)
- @SMcCandlish: the example edit was not made by a bot, so this isn't really the best venue to get a resolution. — xaosflux Talk 00:38, 11 October 2017 (UTC)
- See response at the original thread to avoid forking. – Jonesey95 (talk) 02:48, 11 October 2017 (UTC)
WP 1.0 bot and Draft: links
Wikipedia_talk:Version_1.0_Editorial_Team/Index#Draft_links may be of interest. If no one more experienced with Wikimedia bots can look into it, I ultimately could, but I have not yet looked for its code and have no experience with the APIs involved yet. The previous maintainers are inactive. Thanks, —PaleoNeonate – 01:21, 11 October 2017 (UTC)
- I have notified the operators; let's see if they reply. If the operators do not reply, the only direct options are to live with the errors or block the bot - a new operator could take over in the future. — xaosflux Talk 01:33, 11 October 2017 (UTC)
- This reminds me that there appears to be a problem with notifications; it'll likely be a good idea to ping again those involved once this issue is solved. Thanks, —PaleoNeonate – 11:31, 11 October 2017 (UTC)
- I left direct talk page messages, and emails. — xaosflux Talk 12:14, 11 October 2017 (UTC)
This may interest many of you. Please comment. Headbomb {t · c · p · b} 12:29, 11 October 2017 (UTC)
AvicBot and G13
AvicBot used to list pages from Category:Candidates for speedy deletion as abandoned AfC submissions at User:AvicBot/AfCCSD. However, apparently the list stopped being updated because the category was renamed to "Candidates for speedy deletion as abandoned drafts or AfC submissions" per an RfC that expanded the scope of G13. The bot needs to be updated to use the new category name to prevent the list from being permanently empty. GeoffreyT2000 (talk, contribs) 23:15, 5 October 2017 (UTC)
- Have you tried asking @Avicennasis: about this? — xaosflux Talk 00:42, 6 October 2017 (UTC)
- @GeoffreyT2000: Fixed Thanks! Avicennasis @ 05:20, 26 Tishrei 5778 / 05:20, 16 October 2017 (UTC)
Portal: Current Event origin bot is not working
- Portal: Current Event origin bot is not working - normally the bot creates the template for the day as it did here
- but as of Oct 1, 2017, the new daily templates are being made differently, and now a trailing blank line is created on the Portal page each day - I tried to fix it for just two days and was successful, BUT now those two days don't agree with the daily portals of the other days since Oct 1, 2017--2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 12:39, 12 October 2017 (UTC)
- this is the page where the error output is visible to the wiki reader: Portal:Current_events
- these are the changes I tried October 10 and October 11 --2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 12:49, 12 October 2017 (UTC)
- these were the fixes I tried October 11 October 10 --2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 12:55, 12 October 2017 (UTC)
- to see the error - revert my two fixes above and then watch the error result of an extra blank line on each at this page: Portal:Current_events --2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 13:01, 12 October 2017 (UTC)
- I have reverted my changes - I see now what error the bot is making to create the extra blank line - day numbers in the old bot were zero-padded (09, 05, 03), but the new bot writes them as single digits (9, 5, 3) - the problem arises when a date requires two digits, beginning with 10 and so on - thus, the problem was silent until the 10th day of the month was first created - the bot creates a blank line at the end of the subsequent day's entry--2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 00:45, 13 October 2017 (UTC)
- Hi A409, are you referring to page creations by User:Cyberbot_I? Have you tried contacting the operator, cyberpower678? — xaosflux Talk 01:29, 13 October 2017 (UTC)
- @Cyberpower678: - help please--2600:8800:FF0E:1200:F4A0:C59D:9AC4:A409 (talk) 04:20, 13 October 2017 (UTC)
- So what is the problem here? I'm lost.—CYBERPOWER (Chat) 13:12, 13 October 2017 (UTC)
- @Cyberpower678: If I am understanding correctly, the spaces in between the different days/boxes on Portal:Current events/Inclusion are caused by recent changes. I haven't tracked down those changes, but eliminating the empty lines between the different {{Portal:Current events/{{#time:Y F j|{{{1|{{CURRENTYEAR}}}}}-{{{2|{{CURRENTMONTH}}}}}-{{{3|{{CURRENTDAY}}}}}}}}} transclusions should solve the problem. It's fully protected, otherwise I would do it. Nihlus 15:48, 13 October 2017 (UTC)
The extra lines have been removed. -- John of Reading (talk) 06:02, 23 October 2017 (UTC)
Data mining and use
Hello there, I was wondering if anyone could direct me on how to get started using and mining Wikipedia database dumps. I have downloaded the latest pages-articles.xml.bz2 version. The goal is to mine for a particular string in order to figure out the relative need for a bot, and to build a list of pages that would need to be edited if the string is present within the namespace. (Xaosflux sent me here.) Thank you for your help. --TheSandDoctor (talk) 16:15, 24 October 2017 (UTC)
- @TheSandDoctor: AutoWikiBrowser has a database scanner in it that can utilize those XML files. You can also use Help:CirrusSearch for regex searches, as that is built into the site software. Nihlus 18:19, 24 October 2017 (UTC)
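For a one-off scan without AWB, something like the following minimal Python sketch streams the compressed dump without unpacking it; the dump filename, the target string, and the export-schema namespace are assumptions that vary by dump:
import bz2
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # schema version varies by dump
TARGET = "some string"  # hypothetical string being mined for

with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as dump:
    for _, elem in ET.iterparse(dump):
        if elem.tag == NS + "page":
            text = elem.findtext(NS + "revision/" + NS + "text") or ""
            if TARGET in text:
                print(elem.findtext(NS + "title"))
            elem.clear()  # discard each processed page to keep memory use flat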
- Thank you for the ping, Nihlus, as well as for pointing me in the right direction. While I am not yet ready to file a BRFA (I just discovered some minor kinks to work out in the bot's code/need to make it slightly more intelligent), I now have a rough idea of how many pages will be affected, so that I can fill out that field when it is time for the BRFA, and I have been able to compile a list of articles that would need changes (so a win-win-win). --TheSandDoctor (talk) 19:27, 24 October 2017 (UTC)
- BRFA filed. Thanks again for your help, Nihlus; the BRFA has now been filed. --TheSandDoctor (talk) 04:04, 26 October 2017 (UTC)
G13 helper script/semi-automated deletion
Hey, all, I was thinking of writing a script to automatically assess G13 speedy deletion requests, after an influx of them today. Basically, I'd write a script that would scan Category:Candidates_for_speedy_deletion#Pages_in_category and, for each page there in the Draft namespace, check whether the CSD nomination is G13 (probably by testing for inclusion in Category:Candidates for speedy deletion as abandoned drafts or AfC submissions) and whether the second-most recent edit (i.e. the edit before the nomination) is more than 6 months prior, and if so, provide a deletion link on the page. But I don't know if such assistance is too close to MEATBOT-like automation, especially given the use of admin tools, so I figured I'd ask here first in case people think that would need some kind of approval. I figure G13 is low-impact enough (not article space, free refund) and has a simple enough inclusion criterion that it isn't a big deal. Any thoughts? Writ Keeper ⚇♔ 18:05, 27 October 2017 (UTC)
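For illustration, a minimal Python sketch of the check described above, querying the MediaWiki API with the requests library. It only lists candidates rather than deleting anything, and the 182-day cutoff is an approximation of the six-month rule:
from datetime import datetime, timedelta
import requests

API = "https://en.wikipedia.org/w/api.php"
CAT = "Category:Candidates for speedy deletion as abandoned drafts or AfC submissions"
CUTOFF = datetime.utcnow() - timedelta(days=182)  # roughly six months
session = requests.Session()

# All members of the G13 category in the Draft namespace (118).
members = session.get(API, params={
    "action": "query", "format": "json", "list": "categorymembers",
    "cmtitle": CAT, "cmnamespace": 118, "cmlimit": "max",
}).json()["query"]["categorymembers"]

for page in members:
    # Fetch the two most recent revisions; the older one is the edit
    # made just before the G13 nomination itself.
    data = session.get(API, params={
        "action": "query", "format": "json", "prop": "revisions",
        "titles": page["title"], "rvprop": "timestamp", "rvlimit": 2,
    }).json()
    revs = next(iter(data["query"]["pages"].values())).get("revisions", [])
    if len(revs) == 2 and datetime.strptime(revs[1]["timestamp"], "%Y-%m-%dT%H:%M:%SZ") < CUTOFF:
        print("G13-eligible:", page["title"])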
- You should advertise at WT:CSD and WT:MFD (discussion is prob best had at WT:CSD but could be here as well). — xaosflux Talk 19:03, 27 October 2017 (UTC)
- Writ Keeper, are you writing a script to delete the page, or simply add a "delete this page" link in the G13 notice? Because I was under the impression Twinkle did the first and the template did the second. In other words, I think I'm badly misinterpreting what you're asking... Primefac (talk) 12:09, 28 October 2017 (UTC)
IABot v1.6
I thought I would point everyone to the significance of v1.6 of IABot.
https://github.com/cyberpower678/Cyberbot_II/pull/46 —CYBERPOWER (Trick or Treat) 21:46, 28 October 2017 (UTC)
Helping the vandals do their work
A provocative title for a cautionary tale. Please see User_talk:Ladsgroup#Helping the vandals do their work for the details of a minor episode of janitors cleaning up the crime scene too quickly / early / inappropriately. Shenme (talk) 23:34, 22 October 2017 (UTC)
- This isn't the bot's fault. It did its job correctly. The fault lies with the user who failed to revert the vandal all the way. Obviously it was a mistake on the user's part, but you can't blame the bot for something it was programmed to do simply because someone else didn't clean up the mess entirely without noticing.—CYBERPOWER (Trick or Treat) 02:08, 23 October 2017 (UTC)
- It often happens that inexpert vandal fighters, bots, and editors who make edits without even noticing the vandalism make it harder to undo, rollback, or easily restore the last good version; but that shouldn't ever really make the vandalism "permanent". The altered or lost text can always be restored. It sometimes just takes more investigation, which should give the investigating editor a better grasp of what makes the article work. So, the time spent doing that shouldn't be regarded as a complete waste, and the vandalism can even be regarded as having contributed to bettering the article. Dhtwiki (talk) 06:40, 23 October 2017 (UTC)
- To be honest, I feel IP editors are becoming more and more malicious. The ratio of good IPs to bad ones is leaning more and more towards bad. At this point I feel like we have more disruptive IPs than productive ones. IMO, we should disable IP editing and require the registration of accounts to edit. It would still be the encyclopedia anyone can edit, but you would just need to register first. This would also go a long way towards counteracting block evasion and sockpuppetry.—CYBERPOWER (Trick or Treat) 13:35, 23 October 2017 (UTC)
- My recent estimation of IPs is they very often do expert work. My worst experience was with the IP represented by this account, whose persistence, combined with dynamic addressing, could have been tamped down by aggressive range blocking. In the example given here, putting IP edits under automatic pending review status if, as here, there are large changes without edit summaries, might be doable. There might be the possibility of rollbacks to edits that aren't most recent, etc. If there are hard-case IPs that do everything to circumvent restrictions placed on them, would banning IP editing help? How would it stop sockpuppetry via registered acounts combined with ISP hopping? Dhtwiki (talk) 22:09, 30 October 2017 (UTC)
- Well, there is the recent cookie-block system that was implemented. Familiar hoppers that have a block on their browser cookie are still blocked. If IP editing were disallowed, we could ramp up the aggressiveness a little by auto-hard-blocking the IP address the hopper ends up on because of their cookie. This in turn may flush out possible sock puppets. In addition to that, we could give administrators another shiny new button. I don't even think we'd need a CU for it. "Block accounts on this IP address." If an administrator clicks it, it gives the potential of swiftly blocking the sleeper accounts, if used correctly, without having to spill the actual IP address. Just my thoughts.—CYBERPOWER (Trick or Treat) 23:14, 30 October 2017 (UTC)
- Using cookies sounds like something that's easily circumventable, as cookie files are easily and frequently edited by their users. What is the feasibility of putting problem IP edits up for review, as is presently done on a per-article basis? The last time I saw a discussion of preventing IP editing altogether, I got the impression that IP editing is practically a fundamental tenet of Wikipedia, at least for some people. Dhtwiki (talk) 00:00, 7 November 2017 (UTC)
Flicker 0056
Is this a legit bot? I don't recall any BRFAs for it... CHRISSYMAD ❯❯❯¯\_(ツ)_/¯ 13:48, 9 November 2017 (UTC)
- It was just a new user experimenting. I have boldly removed some of the garbled syntax and false bot/admin claims. – Jonesey95 (talk) 13:56, 9 November 2017 (UTC)
The 2017 Community Wishlist Survey is up for proposals (November 6-19). You can make proposals and comment on stuff to help the technical collaboration review and organize the proposals, but the larger community input will happen from Nov 27–Dec 10. Headbomb {t · c · p · b} 15:12, 8 November 2017 (UTC)
- Filed my proposal here.—CYBERPOWER (Chat) 16:06, 8 November 2017 (UTC)
- Mine's here. Headbomb {t · c · p · b} 18:35, 9 November 2017 (UTC)
- This entry would be useful for bot makers. A search using Elasticsearch takes < 1 minute compared to minutes / hours with database dumps via AWB. -- GreenC 21:05, 9 November 2017 (UTC)
InternetArchiveBot notices about nothing but archive-url additions
- InternetArchiveBot (BRFA · contribs · actions log · block log · flag log · user rights)
- InternetArchiveBot (BRFA · contribs · actions log · block log · flag log · user rights) (Task: 2)
Where's the place to "seek out a community discussion" on "a community bot" (whatever that means) about getting a bot to stop leaving a particular kind of pointless message? The InternetArchiveBot does various things, and even leaves some helpful messages, but when it leaves a note on an article's talk page saying that all it did was provide an archive-url for a cite that didn't have one, this is pointless, annoying bot-spam. We don't need to know that it did something trivial that no one sane would question, and we already know – anyone watching the article already saw the edit, so now they're getting a second watchlist hit for the same thing for no reason.
I went to the bot's talk page, and it isn't editable except by admins. I got to the author/operator's page, which directed me to file a ticket about it at Phabricator. So I did [17]. The response to that was a testy "The bot is currently approved to run with these message.", which is a silly thing to say. All the bots are approved to do what they do, or their operator would be in trouble and the bot would be blocked. I was told "The last discussion regarding them had no consensus for change", which means it has been discussed before and other people are tired of these messages, too. "If you feel the bot should stop leaving messages, please seek out a community discussion. This is a community bot". I see a bot requests page, which seems to be only for asking for bots to do stuff, not to stop doing things, and isn't really a discussion page; and the noticeboard, which appears to be for reporting bugs and policy violations.
So, I'm not really sure what the process or venue is. PS: This isn't about ALL InternetArchiveBot notices, just the no-one-will-care pointless ones. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 12:50, 4 October 2017 (UTC)
- @SMcCandlish: Can you please provide some example Diffs below of the edits you have concerns with? — xaosflux Talk 12:58, 4 October 2017 (UTC)
- Sure, any of these [18]. The bot leaves various messages we do care about, but this one is just wrong. Its instruction that we need to go look at what it did, when all it did was add an archive URL (and the same page watchers already saw it do that), makes no sense. (Same goes if it did that and also marked a dead original URL as such.) We do want to go look when the bot flags actual problems. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:26, 4 October 2017 (UTC)
- And to answer your other question, per Wikipedia:Bot_policy#Appeals_and_reexamination_of_approvals, this is the appropriate venue to reexamine bot task approvals if you are at an impasse with the operator. — xaosflux Talk 13:10, 4 October 2017 (UTC)
- WP:VPP would be a good place to discuss. You are proposing a change to IABot after all. I am more than happy to make any changes, but given the runtime of IABot, the changes should have a consensus.—CYBERPOWER (Chat) 13:20, 4 October 2017 (UTC)
- If you think proposing a minor change is an "appeal", then ok. I also don't think this needs any bureaucracy; it's just a common sense matter. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:26, 4 October 2017 (UTC)
- VPR would probably be better - this doesn't appear to be about changing policy, or at least a link there to here - my initial concern would be if there are out-of-scope edits being made (it does not sound like that is occurring). I agree if this is really just a request to change the currently approved scope it needs to have general community consensus measured. — xaosflux Talk 13:48, 4 October 2017 (UTC)
- I was going to guess VPT, but whatever. Why does this need to be a big process at all, though? "Stop spamming us with double watchlist hits" isn't something we really need to hash over at length, is it? :-) Anyway, that's four separate venues suggested so far (VPR, VPP, bot appeals, and this board). — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:26, 4 October 2017 (UTC)
- (edit conflict) Because other users do not see it as spam. They see it as a meaningful message with a quick link to verify that the bot placed a meaningful archive there, as well as a quick message on how to best deal with bot mistakes.—CYBERPOWER (Chat) 14:29, 4 October 2017 (UTC)
- We can make a big thread out of it if you want, but see the thread immediately above this one. The argument that watchlist spamming is no big deal has been firmly shot down by the community. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:33, 4 October 2017 (UTC)
- (edit conflict) No argument from me that watchlist spamming is a problem; however, up above was a bot task I approved under the misconception that it was a high-priority task, when it turns out it wasn't, vs. a task that has been approved and in operation for 2 years doing what it does. Plus it's not really spamming a person's watchlist unless they have every article on Wikipedia watchlisted.—CYBERPOWER (Chat) 14:38, 4 October 2017 (UTC)
- I understand the nature of the thread above; what I was referring to was, specifically, the admonition to take community concerns about pointless watchlist-hitting seriously. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 15:15, 4 October 2017 (UTC)
- Argument from me. There are tools that have been built for the supposed watchlist spamming; "I don't want to use it" is not a valid opposing argument. What I will agree with is the talk page spamming itself, as sometimes they can get inundated with bot messages. I don't have a strong opinion of it in either direction though since it does provide a better avenue to check the edit's validity. Nihlus 14:52, 4 October 2017 (UTC)
- Much of that sounds like exactly the reasoning that was just shot down again the thread above this one. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 15:16, 4 October 2017 (UTC)
- @SMcCandlish: It is my view on the matter and has not been "shot down" in any capacity. Please, learn how to politely disagree with someone. Nihlus 16:51, 4 October 2017 (UTC)
- You also reacted with offense when someone in that thread gave you sound advice and a case to examine for why the advice was sound, then you continued to react with umbrage when it was suggested you were misinterpreting the advice as some kind of personal insult. So, I'll pass on this game. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 22:59, 4 October 2017 (UTC)
Here is an example of the type of diff SMcCandlish is referring to. It is a notification placed immediately after this edit was made. Primefac (talk) 14:35, 4 October 2017 (UTC)
- That's not actually an example, since it has other notices in it, including a claim to have corrected two URLs, which someone may want to check. The example I provided above is, well, an example of what I mean. Here it is again: [19]. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:43, 4 October 2017 (UTC)
- All righty then. Primefac (talk) 14:47, 4 October 2017 (UTC)
- @Pyxis Solitary: pinging the user whose report about this issue at WT:TPG inspired this change request in the first place. PS: Since apparently any of four venues will work and this is one of them, I think we can consider this the demanded discussion being open. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 14:39, 4 October 2017 (UTC)
- Thank you. Pyxis Solitary talk 05:27, 5 October 2017 (UTC)
- Clearer statement of the issue: The bot's "I added an archive-url" notice (that the bot did something useful, routine, and virtually unbreakable) is directly equivalent to the watchlist notice of the edit itself, and the talk edit makes another watchlist hit, so that's three notifications about an edit that will never be dangerous. In the particular case of "notify upon archive-url", it grossly violates the spirit though not the exact letter of WP:COSMETICBOT – not in making the actual archive url edit, but in pestering us about it. The entire reason we have the COSMETICBOT rule is the pestering effect, and just the watchlist hits alone were annoying enough to cause this rule to be adopted. Now add talk page spamminess, which impedes talk page usability, wastes editors' time, increases talk page archival maintenance overhead, etc. Again, I want to stress that this is not about IAB notices that may actually require human review/intervention. Still want those.
Simple pseudocode fix:
if $CHANGESBOTMADE == $ARCHIVEURL or $CHANGESBOTMADE == ($ARCHIVEURL + $DEADURLYES) then $POSTABOUTIT = no
– i.e., if it's done anything at all other than that trivia (including that trivia and something non-trivial), then go ahead and post a notice.
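Expanded into a runnable sketch (Python; the change-type labels here are illustrative only, not IABot's actual internals):
def should_post_notice(changes):
    # changes: the set of change types the bot made in this edit
    # (hypothetical labels). Suppress the talk page notice when the edit
    # consists solely of adding an archive URL, optionally with the
    # matching dead-url flag; post it when anything more was changed.
    trivial = {"added_archive_url", "set_dead_url_yes"}
    return bool(set(changes) - trivial)
So should_post_notice({"added_archive_url"}) is False (no talk page post), while should_post_notice({"added_archive_url", "replaced_url"}) is True.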
— SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 15:04, 4 October 2017 (UTC), clarified 23:05, 4 October 2017 (UTC)
- That is not how I interpret COSMETICBOT at all. COSMETICBOT applies to cluttering watchlists with changes that produce no visible change to the page's rendered output. IABot is making visible changes to the page. COSMETICBOT does not apply, IMO.—CYBERPOWER (Chat) 15:18, 4 October 2017 (UTC)
- My prediction that someone would focus on the exact wording rather than the underlying reasoning and intent of the rule is exactly why I said "the spirit if not the exact letter of WP:COSMETICBOT". I've now clarified that with some emphasis in my earlier post. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 23:03, 4 October 2017 (UTC)
- For clarity, completely disagree: archivebot's talk page messages are fine as far as I'm concerned. In other words SMcC has no consensus whatsoever on this. See prior (still open) discussion at Wikipedia talk:Talk page guidelines#Deleting bot notices. I'd suggest closing this Bots noticeboard thread for obvious forumshopping. Anyway, I'm not prepared to discuss this same issue in two different places at the same time. --Francis Schonken (talk) 15:23, 4 October 2017 (UTC)
- It isn't forum shopping for the simple reason that the topic arose at a relatively narrow locale (WT:TPG) and @SMcCandlish: correctly sought wider input via an appropriate noticeboard, and left a pointer at the original thread saying he had done so. I do the same thing, though I usually mark the original thread closed to prevent this sort of misperception/accusation. I also note there were a couple of hours or so between opening this thread and later adding the pointer. I try to post both in quick succession to further reduce avoidable controversy. NewsAndEventsGuy (talk) 16:34, 4 October 2017 (UTC)
- I'd forgotten about the other thread; that's why the ping to the original raiser of the issue came so late as well. Whether consensus will emerge to make a minor change to this bot's output will be determined by the discussion; it isn't, as Francis seems to suggest, a necessary precondition for the discussion to happen. And no, obviously not forum shopping, since WT:TPG isn't a venue for proposing bot changes (even if respondents to this thread aren't entirely clear what is a good venue for this kind of request). Francis and I have had an on-again-off-again personality dispute dating back to the 2000s, and I've learned not to react much to these jabs from him. PS: The two discussions are actually distinct: this is about whether to tweak the bot's messaging; the TPG one is about whether to archive or just delete the bot messages when they're old (the off-topic part of it has been closed and pointed at this discussion). — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 22:28, 4 October 2017 (UTC)
- All of the proposed alternative venues have been notified of this discussion, to centralize. — SMcCandlish ☏ ¢ >ʌⱷ҅ᴥⱷʌ< 22:51, 4 October 2017 (UTC)
I am the editor that created the discussion in Wikipedia talk:TPG about bot notices in talk pages. In regards to the InternetArchiveBot and its announcement about modification of external links: what's the point of keeping these notices on an article's talk page after an editor has checked the links and found them okay? Pyxis Solitary talk 05:27, 5 October 2017 (UTC)
I for one find the InternetArchiveBot notices useful. Though the bot is getting better, it doesn't always pick the right archive URL and sometimes misclassifies links as dead; it's also quite possible that a link it detects as dead needs manual updating. The talk page notices, which as far as I know only show up when the bot adds a (possibly invalid) archive URL, are a useful way of keeping track of what it does, and serve to show that a human has indeed OK'd the bot's changes. The notices also serve as a handy way to get to the bot's interface, which I've used several times. Graham87 08:05, 5 October 2017 (UTC)
- If an editor checks the modification(s) by the archive bot and finds them to be okay, the value in the bot notice is changed to "true" which indicates that the modifications were correct. If the editor finds a problem, the value is changed to "false".
So ... if the modifications are reviewed and checked as being okay ... what's the point of keeping the notice on the talk pages? (I changed the indentation of my comment directly above yours because it's easily overlooked.) Pyxis Solitary talk 04:05, 6 October 2017 (UTC)
- If the editor finds a problem, the value isn't changed to false ... the checked= value only notes whether the links have been checked or not. As for keeping the notices on the pages, I think they can be treated as any other discussion ... and archived if necessary. I feel a bit uneasy about the idea of removing them, as I do for any talk page message ... but I probably have a more extreme view that talk pages should be a record of all discussions than most of the Wikipedia community. Graham87 15:45, 6 October 2017 (UTC)
- The "checked" value can be changed to "failed", if some links are found wanting. In the not too distant past, when I was constantly checking the bot's output that affected articles on my watchlist, I used that value as well as listing the failed links underneath. My practice must have been an outlier, but it was due to the prompting of the talk page message that I checked so carefully, and found much that needed checking, before CyberbotII/IABot improved considerably. In any case, I am someone else who thinks the bot's talk page messages helpful, although they might seem verbose, especially when so many of them do go unchecked, and perhaps are less needed as the bot and the internet archives themselves improve. Dhtwiki (talk) 05:41, 7 October 2017 (UTC)
- What's disturbing is that I'm still getting false positives. Hawkeye7 (discuss) 05:52, 7 October 2017 (UTC)
- False positive in that IABot declares a link dead when it isn't, or that it thinks an archive snapshot is useful when it isn't? Both can happen and, I think, are both instances of false positives. Dhtwiki (talk) 21:28, 7 October 2017 (UTC)
- Both are happening. The former is still too common. It happens when the Bot thinks a site is down but it isn't. It used to occur when a site went down temporarily, but now we're getting into weird cases where the Bot cannot connect but apparently the rest of us can. This is usually the Bot's fault, but not always; in one recent case a site was returning an HTTP error code but still rendering the page okay. The second is less common but does happen; usually it is the internet archive's fault. Hawkeye7 (discuss) 21:15, 17 October 2017 (UTC)
Agree that these posts are a waste of time and bandwidth. I notice ClueBot NG doesn't do the same thing whenever it reverts vandalism. It simply leaves a link in the edit summary asking others to report false positives. I don't see why something similar can't be implemented here - in the example SMC provides, the summary for the edit the bot is referring to simply reads, "Rescuing 1 sources and tagging 0 as dead. #IABot (v1.5.4)". There's plenty of space in there for a link like the one ClueBot NG leaves. It's one thing to alert users to edits like these, but there's a better way to do it, if it needs to be done at all. Zeke, the Mad Horrorist (Speak quickly) (Follow my trail) 14:13, 7 October 2017 (UTC)
- One difference between ClueBot NG and IABot is that the former bot (usually) decides on the basis of, and reports, vandalism that both fits on one screen and whose vandalistic attributes are immediately apparent (e.g. article text replaced by the word "poopy"). IABot is apt to report many decisions per edit, with changes to text that are apt to be widely strewn throughout the article, and whose validity isn't readily apparent. Therefore, IABot's talk page messages bring a needed synopsis that ClueBot NG usually doesn't need. The idea of having a reporting link, though, is a good one, as I'm not sure that reporting failed links to the talk page ever served well as feedback. Dhtwiki (talk) 21:43, 7 October 2017 (UTC)
- Agree that it may be worth checking IABot's archive links, so the talk page message is not spam. Also, the changes should be very close together in time, so they should not result in multiple watchlist "events". All the best: Rich Farmbrough, 20:28, 17 October 2017 (UTC).
- At present the error rate for the IABot is low, but still too high to trust it, so its actions still really do need checking. Hawkeye7 (discuss) 21:15, 17 October 2017 (UTC)
- Close as Working as intended. The bot does have misfires from time to time, so putting the summary on the talk page (to try and flag down human attention) is appropriate. However, once the report has been reviewed by a human and corrected, there's no need for the post on the talk page any more, so it can safely be archived. This seems like rule creep that is going to open a lot of cans of worms that would be best left alone. Hasteur (talk) 02:21, 18 October 2017 (UTC)
- Most reports aren't checked. Even I don't do that any more. And when I did do it, I'd leave notes as a reply, sometimes to encourage further checking. So, no need to archive these messages, at least not faster than normal talk page archiving. Dhtwiki (talk) 22:37, 19 October 2017 (UTC)
suggest WP:BOTAPPEAL for communications issues. ... when the bot was doing a straightforward edit, the talk page message seemed completely over the top. That is not to say the bot isn't useful and, by the look of it, working well. But I think there are a number of WP:BOTCOMM issues.
- The bot's talk page message refers to itself as 'I', thereby impersonating a human. This is irksome, to me at least, once one discovers it's a bot.
- The bot asks to have its work checked ... but then refers to external tools on wmflabs.org rather than remaining within Wikipedia.
- It refers to 'this template', but only if you edit the page do you see that this means the 'sourcecheck' template.
- I don't like the fact that this Bots/Noticeboard discussion was not mentioned on the bot's talk page.
I think, as it is a bot, it would be better if it admitted to being a bot and gave precise instructions. Rather than [this diff], I think I'd prefer to see something along the lines of:
The Internet Archive BOT has made the following changes:
URL1 (dead) -> Archived URL
It would be helpful if the modifications could be manually reviewed, setting checked=true in the sourcecheck template if the edit was successful, or failed if not. For detailed information on InternetArchiveBot see **HOWTO** .. the HOWTO going back to a BOT page or subpage and ensuring the FAQ/HOWTO covered the case of manually checking BOT work first.
- After someone has checked it with checked=yes, the template should simply say: An editor has reviewed this edit and fixed any errors that were found. If you find any issues, DOTHIS. But do not leave information splattered around, as seems to happen at present when checked=yes.
In summary, the BOT looks to be doing some great work, but I think it's really tricky not to fall foul of WP:BOTCOMM, and that area needs improvement. I'd prefer it didn't make a talk page entry for simple edits, but understand that *might* be considered necessary. Djm-leighpark (talk) 23:07, 9 November 2017 (UTC)
- We're not at a stage of WP:BOTAPPEAL. Follow WP:BOTISSUE first. Headbomb {t · c · p · b} 00:30, 10 November 2017 (UTC)
- Apologies if I appear not to be following WP:BOTISSUE ... I did try to discuss this on the operator's user talk page but was told 'no consensus', which brings me to this noticeboard. Following WP:BOTISSUE, and after reviewing this discussion, I feel I have reasonable grounds for expressing that this BOT no longer has consensus for its task, due to the bot's communications edits on the article talk page. I suppose an alternative would be to hide the bot from my watchlist, but that only partially solves the issue. I am against a close as working as intended, as the BOT's communications are annoying me - likely not what was intended - and likely other community members as well. I was tempted to say I agree these posts are a waste of time and bandwidth, but I tried to have a hard look at what the edits were saying, and I feel they need to reorganise what they say in a more clear, concise and different way that is not off-putting. In essence I am strongly suggesting that the BOT and the sourcecheck template be changed in how they interface/interact on the article talk page, and hope that might bring consensus. Thank you. Djm-leighpark (talk) 05:51, 10 November 2017 (UTC)
Category:Opted-out of message delivery is now Category:Wikipedians who opt out of message delivery
Notification for anyone who uses that category in their bot. Jo-Jo Eumerus (talk, contributions) 11:00, 11 November 2017 (UTC)
Cluebot reversion of good-faith edits
I tried to start a discussion regarding Cluebot on the Cluebot talk page and my comments were archived by the bot without response. I'm concerned about Cluebot reverting good-faith edits, and the effect this may have on potential contributors.
Reading through the Cluebot pages and considering the lack of response, and rapid archiving, of my comment -- it is my feeling that discussions of this nature are not welcomed by the bot operator. It seems to me that the wider community ought to have a voice in how Cluebot is operated and should be entitled to review Cluebot's work on an ongoing basis and discuss the bot's settings and edits without having to fill out forms and have the discussion fragmented. I am concerned that the characterization of the 0.1% "false positive rate" used by the bot's proponents, though useful technically, belies the substantial number of good-faith edits this bot is reverting. Since it has been some years since the bot was approved, I think it's appropriate to review the work it is doing in light of the current editing climate and the evolution of the bot itself (and its settings) over the years.
At a minimum, I believe that the bot's operators and proponents have an obligation to take these concerns seriously enough to discuss them.
While mistaken reverts can be undone, the frustration they may cause to a well-meaning, fledgling contributor cannot.
The Uninvited Co., Inc. 19:52, 3 November 2017 (UTC)
- Seems Cobi (talk · contribs), the bot's owner, hasn't edited since July 2017. Someone may want to send him an email. Headbomb {t · c · p · b} 20:17, 3 November 2017 (UTC)
- In the meantime, did you report the false positive? Headbomb {t · c · p · b} 20:19, 3 November 2017 (UTC)
- (edit conflict × 2) @UninvitedCompany: There is a notice on that page that says to report false positives at User:ClueBot NG/FalsePositives and not on that page (this is also in every edit summary for the bot). That's how they track issues and make improvements to the coding of the bot. I see no reason to create a protracted discussion. Nihlus 20:20, 3 November 2017 (UTC)
- @UninvitedCompany: Re. Special:Diff/808585992: FWIW I'm not a bot operator, and the question you raised on the bot's talk page was best answered by an operator of the bot, hence why I did not respond. —k6ka 🍁 (Talk · Contributions) 22:26, 3 November 2017 (UTC)
(←) To answer your two specific questions:
How have the decisions been made over what edits the bot will revert?
— The Uninvited Co., Inc.
- The bot uses an artificial neural network to score each edit, and the bot reverts at a threshold calibrated to yield fewer than 0.1% false positives. See User:ClueBot NG#Vandalism Detection Algorithm, User:ClueBot NG/FAQ#Why did ClueBot NG classify this edit as vandalism or constructive?, User:ClueBot NG/FAQ#I think ClueBot NG has too many false positives. What do I do about it?.
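(As a sketch of just the decision step – the 0.95 threshold is a made-up value for illustration; the real cut-off is whatever the training run calibrates:)
def should_revert(vandalism_score, threshold=0.95):
    # vandalism_score: the neural network's score for the edit; the
    # threshold is calibrated offline so that the false positive rate
    # stays below 0.1%. Post-classification sanity checks (whitelists,
    # revert limits, etc.) still apply after this returns True.
    return vandalism_score > threshold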
What is the best way to have an open discussion about the way this automation is being conducted and its effect on new contributors?
— The Uninvited Co., Inc.
- By giving specific, actionable suggestions whose merits can be discussed and on which the community can come to a consensus.
-- Cobi(t|c|b) 23:03, 3 November 2017 (UTC)
To reply to your comments here:
I tried to start a discussion regarding Cluebot on the Cluebot talk page and my comments were archived by the bot without response. I'm concerned about Cluebot reverting good-faith edits, and the effect this may have on potential contributors.
— The Uninvited Co., Inc.
False positives are an unfortunate technical inevitability in any system that automatically categorizes user content. Human editors suffer from this failing as well. The only thing that can be done is to figure out where the trade-off should be made. I am certainly open to discussing where that trade-off is, but as you haven't made a proposal yet, I am happy with where it currently is.
Reading through the Cluebot pages and considering the lack of response, and rapid archiving, of my comment
— The Uninvited Co., Inc.
It's the same 7 day archival period you have on your talk page. I was busy and your message at the time didn't appear particularly urgent in nature, and in the 7 days no one else had any thoughts on the matter and so the bot archived it.
it is my feeling that discussions of this nature are not welcomed by the bot operator.
— The Uninvited Co., Inc.
This is a hasty generalization.
It seems to me that the wider community ought to have a voice in how Cluebot is operated and should be entitled to review Cluebot's work on an ongoing basis and discuss the bot's settings and edits without having to fill out forms and have the discussion fragmented.
— The Uninvited Co., Inc.
Free-form discussion is encouraged on the bot's talk page. Or here.
I am concerned that the characterization of the 0.1% "false positive rate" used by the bot's proponents, though useful technically, belies the substantial number of good-faith edits this bot is reverting.
— The Uninvited Co., Inc.
False positive rates are used as standard metrics for any kind of automated classification system. <0.1% means less than one edit is falsely categorized as vandalism out of every thousand edits it examines.
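(In standard classifier terms this is the false positive rate, FPR = FP / (FP + TN) – wrongly reverted good edits over all good edits examined – which, since the overwhelming majority of edits are legitimate, is nearly the same as false positives per edit examined.)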
Since it has been some years since the bot was approved, I think it's appropriate to review the work it is doing in light of the current editing climate and the evolution of the bot itself (and its settings) over the years.
— The Uninvited Co., Inc.
Review is always welcome so long as it comes with concrete, actionable changes whose merits can be properly discussed. Pull requests are even better.
At a minimum, I believe that the bot's operators and proponents have an obligation to take these concerns seriously enough to discuss them.
— The Uninvited Co., Inc.
We do.
While mistaken reverts can be undone, the frustration they may cause to a well-meaning, fledgling contributor cannot.
— The Uninvited Co., Inc.
Of course, but that is hard to measure objectively. Do you have any good metrics on the frustration caused to well-meaning, fledgling contributors? I'd love to see that data, and be able to tweak things to help those metrics go in the direction we want. -- Cobi(t|c|b) 23:39, 3 November 2017 (UTC)
- I sense an attitude that the bot is essentially part of "settled policy" and the burden of change falls upon the shoulders of those individuals raising concerns. I don't think that's appropriate for any bot, let alone one that is so prolific, wide-ranging, and discretionary in what it does. I don't see where there has ever been any informed consent by the editing community at large that the tradeoffs made in the design of the bot are appropriate, let alone any ongoing discussion as the bot has evolved.
- In response to your question, I did report the edit using the interface provided.
- The fact that the "false positive rate" is a standard metric for systems with similar architecture does not mean that it is the most appropriate or only metric that should be used in community discussion of the bot's performance. I think it would be valuable for the community and the bot operators/designers alike to be aware of other metrics such as the number of good-faith edits reverted by the bot per unit time. It would be interesting to see whether that figure matches the projection one might make using the theoretical false positive rate and the gross reverts per unit time. The Uninvited Co., Inc. 18:01, 6 November 2017 (UTC)
- Absolute numbers are not useful, that's why we discuss error rate, which includes both false positives and false negatives. Your discussion does not include the latter. There is a balance between reverting too many valid edits versus leaving too many bad edits. Hypothetically, if 10 in every 1000 reverts over some time period are false positives and we up the threshold and bring it down to 2 in 500 reverts over the same time period, then that is 8 good edits preserved but also nearly 500 more vandal edits that someone has to manually review and revert. Who does the burden of reverting these edits fall upon? Where is the line between potentially losing a new editor and exhausting multiple anti-vandalism editors? What if we instead lowered the threshold and got 30 in 2000 false positives, and thus were fixing 100% more vandalism? This is a system where (broadly speaking) lowering false positives also ups the false negatives. — HELLKNOWZ ▎TALK 18:22, 6 November 2017 (UTC)
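(Plugging those illustrative numbers into a quick calculation – toy values only, not real ClueBot statistics:)
# Raising the threshold: 10 FP per 1000 reverts -> 2 FP per 500 reverts.
fp_before, reverts_before = 10, 1000
fp_after, reverts_after = 2, 500
good_edits_preserved = fp_before - fp_after                    # 8
vandal_edits_missed = ((reverts_before - fp_before)
                       - (reverts_after - fp_after))           # 492, left for humans to revert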
- We could...you know...disable IP editing altogether and force account creation. Less vandalism, and less opportunity for it. Better false positive and false negative rates as well. :p—CYBERPOWER (Chat) 18:47, 6 November 2017 (UTC)
- I think this would be a much better discussion if we actually had such metrics. I believe the absolute number is a good indicator of the extent of the problem even if it isn't relevant technically. And I believe it is relevant technically, because it indicates the amount of potential improvement that could be achieved by refining the parts of the bot outside the Bayesian filter. A careful review of reverted good-faith edits might, for example, reveal some obvious patterns that could be used to tweak the filter threshold, or the logic around it. The Uninvited Co., Inc. 01:06, 7 November 2017 (UTC)
- Definitions are everything -- The assertion is made: "<0.1% means less than one edit is falsely categorized as vandalism out of every thousand edits it examines."
No, that's not what it means. It means less than one in a thousand is observed by a human editor as incorrectly categorized, who then follows the not-so-simple process to report it. For those pages no one follows, most of ClueBot's activities are unmonitored. Rhadow (talk) 15:34, 12 November 2017 (UTC)
- Yes, definitions are everything. We don't calculate that number based on reports. That number is calculated by dividing the training data randomly in half, giving one half to the engine to train it, and then giving the rest of the training data to it as if it were live edits. It has to categorize them correctly with a false positive rate of less than 0.1%. That is, for every 1,000 edits we feed it for testing, only one can be a false positive. And this is just the core engine, before any sanity checks like the rest of that diagram after the "above threshold" box. See this FAQ entry. Please don't make uninformed assertions without doing at least a little bit of research. Nowhere have we ever said that the false positive rate is based on reported false positives, and asserting it like you know it as fact is not an appropriate way of bringing up questions or theories. Neither is it appropriate to assert as true that my factual statements, backed up by process and code that are both publicly review-able, are definitively wrong. -- Cobi(t|c|b) 22:20, 12 November 2017 (UTC)
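(The calibration described here is ordinary holdout evaluation; a minimal sketch, assuming hypothetical train() and classify() functions supplied by the engine:)
import random

def holdout_false_positive_rate(labeled_edits, train, classify):
    # labeled_edits: list of (edit, is_vandalism) pairs from the training data.
    # Split in half at random, train on one half, replay the other as live.
    random.shuffle(labeled_edits)
    half = len(labeled_edits) // 2
    model = train(labeled_edits[:half])
    held_out = labeled_edits[half:]
    false_positives = sum(1 for edit, is_vandalism in held_out
                          if classify(model, edit) and not is_vandalism)
    # Per the FAQ's definition: false positives per edit fed in for testing;
    # the threshold is tuned until this stays below 0.001.
    return false_positives / len(held_out)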
- Thank you Cobi, for sending us to the definition of the published false positive rate (FPR). This is a second-semester epidemiology statistics exercise, made slightly more complicated by the third-semester definition of training sets used in AI. Publishing a false positive rate (Type I errors) from the training exercise is incomplete if not misleading. It would be more informative to see the whole confusion matrix. ClueBot uses a neural network which, unlike other classification methods, may give superior numeric results, but may never provide an explanation of how it identified a vandal's edit. An outsider needs the whole picture of the results in order to have the same level of confidence you do.
People would have a higher level of confidence in the protocol if they knew the size and the age of the training set. If the training set is not a valid sample of today's production data, then the 0.1% FPR is meaningless. I would like to see the rate of reported false positives each week or month from the actual data, not what the expected rate was from the training set. Rhadow (talk) 15:18, 13 November 2017 (UTC)
- All of the data is available either on the report website or on the Wikipedia API itself. You are welcome to generate any statistics you like. -- Cobi(t|c|b) 18:33, 13 November 2017 (UTC)
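(For reference, the full confusion matrix asked for above reports all four cells rather than a single rate – layout only, with counts to be filled in from an actual test run:)
                        actually vandalism    actually good
classified vandalism    true positives        false positives
classified good         false negatives       true negatives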
- Hello The Uninvited -- You are correct, ClueBot III cleans up the ClueBot talk page frequently, so that a casual visitor will find no evidence of complaints.
And another observation -- the 0.1% rate means nothing without a discussion of the absolute numbers behind it. There were 3.3 million edits last month. Of those, it looks like ClueBot makes about 30 revisions an hour or 21,000 a month. I rather doubt there are editors looking at 21,000 reversions a month. No more than 210 miscategorized articles are being reported a month. The more ClueBot does, the better the numbers look, because there are no humans to check on it. Rhadow (talk) 15:58, 12 November 2017 (UTC)
- Before talking about calculations, please get your definitions correct. The archival settings for User talk:ClueBot Commons are set to 7 days, a common setting for user talk pages. The archives are there for anyone who wishes to look into them, and I am certainly open to anyone who wants to revisit discussions that were archived too soon to do so. Just, if you do so, add something to the conversation, because otherwise there is no value in pulling it from the archives. -- Cobi(t|c|b) 22:32, 12 November 2017 (UTC)
- Is this report based on a single diff (23 October 2017) of ClueBot making a revert of an edit that a human might judge to be good faith, and so would merely click "undo" rather than "rollback"? The most important editor at Wikipedia is ClueBot because reverting vandalism quickly is key to convincing vandals that their time would be better spent at other websites. The most important person at Wikipedia is Cobi, ClueBot's maintainer. I agree that ClueBot's talk is archived too aggressively but some more generic discussion (WP:VPMISC?) about ClueBot's possible mistakes should occur rather than insisting that Cobi personally respond to each complaint. It is impossible for a bot to revert vandalism without occasionally reverting good-faith edits. Experience shows that is also impossible for humans. Johnuniq (talk) 21:55, 12 November 2017 (UTC)
- I'm with Cobi here. It's par for the course when users who are clueless about how bots work, or the work that goes into them, come along demanding that the bot be perfect, but sometimes I really scratch my head when someone persists/piles on with no knowledge of said topic. Bots are never flawless, neither are humans; getting things right is all about balance. Just like ClueBot NG, it's similar for me with User:InternetArchiveBot.—CYBERPOWER (Around) 02:29, 13 November 2017 (UTC)
- Seconded. This bot is very useful with false positives within acceptable range. Humans are also there to correct its errors. —PaleoNeonate – 07:11, 13 November 2017 (UTC)
- (Off-topic) People seem to demand perfection for everything and get annoyed when there's a problem. Today the PRESTO card system was experiencing some difficulties and I see people dumping on the system on Twitter saying it has "nothing but problems" when in reality it works fine 99% of the time. Sounds similar to some of the nonsense I've seen on Wikipedia over the years about CBNG (e.g. "ClueBot is clueless" and what other creatively thought-of insults for the bot that has clearly been a WP:NETPOSITIVE, if we looked at bots that way). SMH. —k6ka 🍁 (Talk · Contributions) 01:15, 17 November 2017 (UTC)
Residual issues resulting from the Maintenance script bot's edits in 2015
See this discussion on Meta. Old/invalid accounts were renamed & given new "enwiki" names by the Maintenance script bot but the original accounts apparently weren't closed & account info wasn't migrated to the new/valid accounts... So. Editors are continuing to edit under the old/invalid accounts. Shearonink (talk) 16:30, 9 November 2017 (UTC)
- Not sure this is a BOTN issue, especially since it's being dealt with at meta. Primefac (talk) 16:34, 9 November 2017 (UTC)
- (edit conflict) No comment here. I was just trying to bring it to someone's attention. I've topic banned myself from BOTN for CIR reasons. GMGtalk 16:34, 9 November 2017 (UTC)
- Yes, this probably isn't the completely correct place for a notice about it - I admit I don't operate bots, etc. - but it is an ongoing issue affecting Wikipedia-editing today so I thought it might need some more eyes on it. Could people have two Wikipedia accounts - both the original account that was renamed and the new account - and possibly be editing from both? Anyway, I'll wait for an answer on meta then. Shearonink (talk) 16:48, 9 November 2017 (UTC)
- This isn't a bot related issue, but a part of SUL finalization. —k6ka 🍁 (Talk · Contributions) 01:17, 17 November 2017 (UTC)
Appeal by Δ (BetaCommand)
The community is invited to comment on the appeal lodged by Δ at Arbitration Requests for Clarification and Amendment.
- For the arbitration committee - GoldenRing (talk) 11:13, 18 November 2017 (UTC)
Double-redirect tagging
While the discussion at Wikipedia talk:Double redirects#The bots should operate with a delay has pretty much died down without clear consensus, there's been a suggestion that double-redirect-fixing bots should tag the redirects they fix with {{R avoided double redirect}}. This will help alert human editors to redirects that are left pointing to the wrong location as a result of disputed moves or mergers being reverted. Can this be implemented? Pinging bot operators R'n'B, Xqt and Avicennasis. --Paul_012 (talk) 10:12, 21 November 2017 (UTC)
- I propose to file a task at Phabricator so that this proposal can be implemented in the redirect.py script of the common pywikibot repository. @xqt 11:49, 21 November 2017 (UTC)
- I would certainly oppose a bot adding {{R avoided double redirect}}. Move a page like Proceedings of the Royal Society, and then you'd have 48 redirects tagged with that for no real reason. Headbomb {t · c · p · b} 12:31, 21 November 2017 (UTC)
- What if limited to redirects which aren't the result of page moves? My original concern was mostly with pages that were changed into redirects and then reverted. --Paul_012 (talk) 23:51, 21 November 2017 (UTC)
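(If this were implemented in pywikibot's redirect.py, the conditional tagging might look roughly like the sketch below; the move-log heuristic and the template placement are assumptions about how the suggestion could be realised, not existing script behaviour:)
import pywikibot

def fix_double_redirect(page):
    # page: a pywikibot.Page that currently redirects to another redirect.
    final_target = page.getRedirectTarget().getRedirectTarget()
    # Simplified heuristic: skip tagging when the redirect itself was just
    # moved (a fuller version would inspect the old target's move log).
    moved = any(page.site.logevents(logtype='move', page=page, total=1))
    tag = '' if moved else '\n{{R avoided double redirect|%s}}' % final_target.title()
    page.text = '#REDIRECT [[%s]]%s' % (final_target.title(), tag)
    page.save(summary='Fixing double redirect per Wikipedia:Double redirects')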
ARBCOM on Wikidata
See Wikipedia talk:Arbitration/Requests#Crosswiki issues: Motion (November 2017). This will be relevant both to WP:BAG members and Wikidata-related bot operators. Headbomb {t · c · p · b} 00:37, 28 November 2017 (UTC)
- How will that affect me? I'm a little confused here.—CYBERPOWER (Around) 01:56, 28 November 2017 (UTC)
- Looks like the immediate impact would be to BAG, from (C) - task approvals for any large tasks involving wikidata integration will need an enhanced level of community support and wider advertisement for input. — xaosflux Talk 02:32, 28 November 2017 (UTC)
- Given how controversial Wikidata integration has historically been, I'd say ArbCom's motion shouldn't make much difference there since we should already have been demanding strong consensus. Anomie⚔ 21:13, 30 November 2017 (UTC)
- I would recommend placing all Wikidata-related bot requests on hold if filed after the time of the motion. Until we have a larger RfC result, these changes are extremely controversial and wide-scale changes explicitly discouraged by the motion. ~ Rob13Talk 14:29, 28 November 2017 (UTC)
They are voting on the future of AWB (a program used for powering bots)
Since AWB has a bot flag, that turns it into a bot engine, I thought you might want to know about a vote going on that will affect the nature of that program: