Wikipedia:Bot requests/Archive 72
This is an archive of past discussions about Wikipedia:Bot requests. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page. |
Archive 65 | ← | Archive 70 | Archive 71 | Archive 72 | Archive 73 | Archive 74 | Archive 75 |
Infobox video game related edits
Hi everyone,
I am a frequent editor of video game-related articles. I'm not really familiar with bots, so if this is a stupid question, my apologies. In the {{Infobox video game}}, the |modes=
is for single-player, multiplayer, or both. Often other modes are introduced, like multiplayer online game, or specifically mentioning "2 player". The |engine=
is intended for game engines with an established, independent article, and not for middleware such as Havok (see Wolfenstein (2009 video game)). Some games use an engine based upon the engine used in a previous game and add a link to that game in the infobox (see South Park); sometimes the word "modified" is added, which doesn't say anything about how it was modified (see Garry's Mod). Is there a way for a bot to systematically go through all of the WP:VG articles and change these things accordingly? soetermans. ↑↑↓↓←→←→ B A TALK 15:45, 11 July 2016 (UTC)
- What are the changes you want to be made? KSFTC 15:56, 11 July 2016 (UTC)
- Could it be so that only video game engines with their own article are mentioned in the engine parameter? That "modified" or a link to another video game is automatically removed? And that in the modes parameter, if that is the case, only [[Single-player video game|Single-player]] or [[Single-player video game|Single-player]], [[Multiplayer video game|multiplayer]] or [[Multiplayer video game|Multiplayer]] is listed? soetermans. ↑↑↓↓←→←→ B A TALK 16:06, 11 July 2016 (UTC)
- I'm sorry if this sounds all too vague. soetermans. ↑↑↓↓←→←→ B A TALK 16:07, 11 July 2016 (UTC)
- (I'm not a bot operator here, so I won't make the real changes, but...) OK,
|modes=
will be pretty easy to do. |engines=
will be a little bit harder. Such an edit probably won't be hard to make, but determining whether the target article is about a video game or a video game engine will be a bit harder. So a video game engine will have {{Infobox Software}}, right? If the target article doesn't have it, then remove the link? Of course, we can give you a list of infobox parameter values for manual review, if that would suit you. --Edgars2007 (talk/contribs) 16:58, 11 July 2016 (UTC)
- It might be better to add code to the infobox template that detects unsupported values for
|engine=
and |mode=
and places the articles in a maintenance category. That way, you would not need a bot. If you start a discussion at WP:VG or elsewhere, ping me and I can try to help with the template code. – Jonesey95 (talk) 17:02, 11 July 2016 (UTC)
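The |modes= normalization asked for above is mechanical enough to sketch. This is a rough illustration with hypothetical helper names, not any operator's actual code; anything it can't confidently classify is left for a human rather than guessed at:

```python
import re

# Standard wikilinked values from the discussion above.
SINGLE = "[[Single-player video game|Single-player]]"
MULTI = "[[Multiplayer video game|multiplayer]]"

def normalize_modes(raw):
    """Map a raw |modes= value from {{Infobox video game}} onto the
    standard values, or return None to flag the article for review."""
    text = raw.lower()
    has_single = bool(re.search(r"single[\s-]?player|1\s*player", text))
    has_multi = bool(re.search(r"multi[\s-]?player|co-?op|online|\b2\s*player", text))
    if has_single and has_multi:
        return f"{SINGLE}, {MULTI}"
    if has_single:
        return SINGLE
    if has_multi:
        return "[[Multiplayer video game|Multiplayer]]"
    return None  # unrecognized value; leave for manual review
```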
Primary School articles
Following this discussion, could anyone help set up a bot task that would
- Look for any article talk page tagged with Category:Articles in Wikipedia Primary School Project SSAJRP. Rename the category Category:Wikipedia Primary School articles
- Look for any article tagged with Category:Articles in Wikipedia Primary School Project SSAJRP in the main name space
- Remove that category from the main namespace
- Add the category in the article talk space with a category name change into Category:Wikipedia Primary School articles
Thank you for your help
Anthere (talk) 07:51, 13 July 2016 (UTC)
Well, thank you anyway if you read me at least. I take it I will have to do it by hand. Oh well. Anthere (talk) 17:54, 17 July 2016 (UTC)
- This doesn't have to be done by hand. This can be done by WP:AWB fairly quickly as well. There are about 280 or so pages either way. -- Ricky81682 (talk) 20:28, 20 July 2016 (UTC)
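The two category moves requested above amount to simple wikitext transforms, whether run through AWB or a script. A hedged sketch (hypothetical function names; no pywikibot/AWB plumbing shown):

```python
import re

OLD = "Articles in Wikipedia Primary School Project SSAJRP"
NEW = "Wikipedia Primary School articles"

def strip_old_category(article_wikitext):
    """Remove the old category tag from an article (step 3 above)."""
    pattern = re.compile(
        r"\[\[\s*Category\s*:\s*" + re.escape(OLD) + r"\s*\]\]\n?",
        re.IGNORECASE,
    )
    return pattern.sub("", article_wikitext)

def add_new_category(talk_wikitext):
    """Append the renamed category to the talk page (step 4 above)."""
    tag = f"[[Category:{NEW}]]"
    if tag in talk_wikitext:  # already tagged; don't duplicate
        return talk_wikitext
    return talk_wikitext.rstrip("\n") + "\n" + tag + "\n"
```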
Commonscat
How about a bot that looks for missing Commons category link in articles where such a Commons category exists with lots of images? Anna Frodesiak (talk) 04:44, 4 May 2016 (UTC)
- @Anna Frodesiak If I'm not mistaken, there is a basic form of this implemented via Wikidata; did you have something more specific in mind? -FASTILY 06:34, 23 May 2016 (UTC)
- Fastily, I think Anna means that plenty of articles don't have the Commons template for whatever reason, and that a bot that locates and adds the Commons template to said articles would be beneficial.--BabbaQ (talk) 18:03, 29 May 2016 (UTC)
- I suppose the bot would find Commons categories by checking if there's a Commons link under the sitelinks listed in the Wikidata item for a given article? Enterprisey (talk!) (formerly APerson) 00:59, 25 June 2016 (UTC)
- Doing... I'm working on this. KSFTC 04:46, 2 July 2016 (UTC)
- Not done but Possible – I have run into problems that I don't know how to fix. Maybe someone more experienced can do this. KSFTC 20:15, 23 July 2016 (UTC)
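If the Wikidata route Enterprisey mentions is taken, the lookup reduces to reading the Commons sitelink out of a wbgetentities response. A minimal sketch, assuming the standard API JSON shape (entities keyed by item ID, sitelinks keyed by wiki name):

```python
def commons_sitelink(entity_json):
    """Given a parsed Wikidata wbgetentities response for one item,
    return the Commons sitelink title (e.g. 'Category:Foo') or None
    if the item has no Commons sitelink."""
    for entity in entity_json.get("entities", {}).values():
        link = entity.get("sitelinks", {}).get("commonswiki")
        if link:
            return link.get("title")
    return None
```

A bot would then add {{Commons category}} only when this returns a Category: title and the article lacks the template.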
Could someone please update Wikipedia:WikiProject Stub sorting/Uncatted stubs? This should be done once in a while, and it hasn't been done since March 2015. עוד מישהו Od Mishehu 04:10, 15 July 2016 (UTC)
- Od Mishehu, how often do you want it updated? Enterprisey (talk!) (formerly APerson) 03:41, 24 July 2016 (UTC)
- Monthly would probably be best. עוד מישהו Od Mishehu 03:43, 24 July 2016 (UTC)
- Great, that's what I was thinking. I'm setting up the task now; it'll probably end up in APersonBot's userspace (so a BRFA isn't required), but I can transclude it in the stub sorting project's space. Enterprisey (talk!) (formerly APerson) 03:44, 24 July 2016 (UTC)
- So apparently running write queries on labs is hard without proper configuration; I'll continue working on this, but any other bot operator is free to take this task and run with it. Enterprisey (talk!) (formerly APerson) 05:30, 24 July 2016 (UTC)
- Progress report: The bot currently does a good job of printing uncatted stub titles, but it isn't good at counting transclusions. Fix coming soon. Enterprisey (talk!) (formerly APerson) 04:03, 25 July 2016 (UTC)
migrate Library of Congress thomas links to congress.gov
The Library of Congress has refreshed their website, but the archiving is a problem. Could we have a bot correct all the references & links to thomas to the congress.gov domain? Beatley (talk)
here is a target list https://en.wikipedia.org/w/index.php?target=http%3A%2F%2F*.thomas.loc.gov&title=Special%3ALinkSearch
- @Beatley: This doesn't seem to be really all that necessary. All the links retarget to the new page, so why do we need to update them? Omni Flames (talk) 00:42, 3 August 2016 (UTC)
- I understand "if it ain't broke", but it's easier to fix before the links rot? And the nice librarians at the LOC did ask. Beatley (talk) 20:37, 3 August 2016 (UTC)
Coding... trying my hand. ProgrammingGeek (Page! • Talk! • Contribs!) 16:44, 4 August 2016 (UTC)
- Since ProgrammingGeek seems to be away from Wikipedia for a bit, Beatley, Not a good task for a bot, since the URL after the domain commonly turns into a 404. IABot will get to this when it's time. Dat GuyTalkContribs 18:59, 16 September 2016 (UTC)
- Ditto. Some links such as this redirect to a 404. If you replaced thomas.loc.gov with congress.gov, the archiving bot or human who comes by to rescue it won't be able to fix it, since there is no archived version at congress.gov. That being said you could still replace all links to thomas.loc.gov (without a path following it) with congress.gov, but that's not particularly helpful and is a redirect that's likely to stay in place. In many cases a visible link to thomas.loc.gov might be desirable, even if it does redirect — MusikAnimal talk 19:02, 16 September 2016 (UTC)
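If anyone does pick this up later, the rewrite-then-verify approach MusikAnimal implies could be sketched like this. The helper names are hypothetical, and the HEAD check is a naive stand-in for whatever validation (and rate limiting) a real bot would need:

```python
import re
import urllib.request

def rewrite_thomas(url):
    """Rewrite a thomas.loc.gov URL to congress.gov, keeping the path."""
    return re.sub(r"https?://(?:www\.)?thomas\.loc\.gov",
                  "https://www.congress.gov", url)

def safe_to_replace(url, timeout=10):
    """Approve the rewrite only if the new URL actually resolves, since
    many thomas.loc.gov paths 404 after the redirect, as noted above."""
    try:
        req = urllib.request.Request(rewrite_thomas(url), method="HEAD")
        return urllib.request.urlopen(req, timeout=timeout).status == 200
    except Exception:
        return False
```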
Convert dead Google Cache links to Wayback Machine
- Something related was discussed by Sfan00 IMG and Redrose64 in October 2012, but not further pursued.
We have a whole lot of Google Cache links, unfortunately most of them dead (it seems, unlike the Internet Archive, Google Cache is only a temporary thing). It would be nice to have a bot convert these links to Wayback Machine links, like I manually did here. The Google cache links contain the URL information:
http://webcache.googleusercontent.com/search?q=cache:umS520jojVYJ:www.aals.org/profdev/women/clark.pdf+%22Joan+Krauskopf%22+%22Eighth+Circuit%22&hl=en&ct=clnk&cd=5&gl=us
→ [1][dead link]
to a {{Wayback}} link like
{{Wayback |url=http://www.aals.org/profdev/women/clark.pdf }}
→ Archive index at the Wayback Machine
I don't know how hard it would be for a bot to also fill the |date=
and |title=
parameters of {{Wayback}}, but that would be optional anyway. Maybe it could if the raw Google Cache link above had an [http... some title]-style label
attached to it. Anyhow, just fixing the dead Google Cache links would be a valuable service in itself.
Of course, the above mentioned usage of {{Wayback}} goes for raw links like the one I fixed. If the Google Cache link was in a citation template's |archiveurl=
parameter, then the fix should be
|archiveurl=http://webcache.googleusercontent.com/search?q=cache:umS520jojVYJ:www.aals.org/profdev/women/clark.pdf+%22Joan+Krauskopf%22+%22Eighth+Circuit%22&hl=en&ct=clnk&cd=5&gl=us
to
|archiveurl=https://web.archive.org/web/*/www.aals.org/profdev/women/clark.pdf
--bender235 (talk) 14:01, 4 August 2016 (UTC)
- This can be added to InternetArchiveBot's functionality. It's easy to add acknowledgement of archiving services and whether to flag them invalid or not. If you're patient I can try and add it in the upcoming 1.2 release.—cyberpowerChat:Online 16:22, 4 August 2016 (UTC)
- Sure, we're not in a rush. Thank you. --bender235 (talk) 17:33, 4 August 2016 (UTC)
- A point I should make: some snapshots don't exist in the Wayback Machine, since Google Cache is only temporary and, when a site dies, its respective cache dies too. That being said, it's probably better to simply remove the cache and tag the URL as dead.—cyberpowerChat:Online 09:46, 5 August 2016 (UTC)
- If there's no snapshot on WBM either, then yes. But if Google Cache (and the original URL) are dead, repair it as a WBM link. --bender235 (talk) 13:32, 5 August 2016 (UTC)
- Oh, InternetArchiveBot does that now? Is that a new feature, or has it always been converting Google Cache to Internet Archive? --bender235 (talk) 00:16, 14 September 2016 (UTC)
- I added it because you requested it.—cyberpowerChat:Offline 00:22, 14 September 2016 (UTC)
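The extraction bender235 describes is mechanical, since the cache URL embeds the original URL. A sketch, assuming Google Cache links keep the q=cache:ID:URL+search-terms shape shown above (hypothetical helper name):

```python
import re
from urllib.parse import urlsplit, parse_qs

def cache_to_wayback(cache_url):
    """Pull the original URL out of a dead Google Cache link and return
    the Wayback Machine calendar URL for it, per the example above."""
    q = parse_qs(urlsplit(cache_url).query).get("q", [""])[0]
    m = re.match(r"cache:(?:[\w-]+:)?(.+)", q)
    if not m:
        return None
    # parse_qs already turned '+' into spaces; drop the search terms
    original = m.group(1).split(" ")[0]
    return "https://web.archive.org/web/*/" + original
```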
Non-nested citation templates
Please can someone draw up a list of templates whose name begins Template:Cite
, but which do not themselves either wrap a template with such a name, or invoke a citation module?
For example:
- {{Cite web}} invokes citation/CS1
- {{Community trademark}} wraps {{Cite web}}
- {{Cite CanLII}} does not wrap another 'cite' template and does not invoke a citation module
I would therefore only expect to see the latter in the results.
The list could either be dumped to a user page, or preferably, a hidden tracking category, say Category:Citation templates without standard citation metadata, could be added to their documentation. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:16, 14 August 2016 (UTC)
- I don't have access to database dumps, but petscan might be able to help. Here's a first stab at a query. – Jonesey95 (talk) 18:07, 14 August 2016 (UTC)
- @Jonesey95: Thank you. Alas, my recent experience suggests that the relevant categories are not applied in many cases. One of the purposes of my request is to enable doing so. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 21:00, 14 August 2016 (UTC)
- Does this link work? It seems pretty basic. It might miss a few that for some reason include the text "cite" or "citation" in the template directly, but I don't think that will be many if any such templates. --Izno (talk) 16:08, 23 August 2016 (UTC)
- @Izno: It's useful thank you, though it does bring up a number of /doc, /sandbox and /testcase pages. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 10:28, 7 September 2016 (UTC)
- @Pigsonthewing: [3] --Izno (talk) 11:26, 7 September 2016 (UTC)
- @Izno: That's great; thank you. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:37, 8 September 2016 (UTC)
Gawker
Gawker (Gawker Media) has been driven into bankruptcy, and then bought out by Univision, which will be shutting it down next week. We've got a lot of articles that cite Gawker pages. Can someone send a bot through the database as a whole, looking for everything cited to Gawker, and then making sure that it's archived (archive.org, WebCite, etc)? DS (talk) 19:23, 18 August 2016 (UTC)
- Perhaps Green Cardamom or Cyberpower678 could handle this? They've both run bots related to archives in the past. This is an extremely high-priority task. ~ Rob13Talk 19:36, 18 August 2016 (UTC)
- InternetArchiveBot maintains a massive database of URLs it encountered on Wikipedia and specific information about them, including their live states. I can set the states of all the URLs of this domain to dead and the bot will act on it.—cyberpowerChat:Limited Access 20:16, 18 August 2016 (UTC)
- @Cyberpower678: I noticed InternetArchiveBot has been disabled for a bit. What's the reason for that? ~ Rob13Talk 20:46, 18 August 2016 (UTC)
- A lot of bugs have been reported. They're all fixed, but given the massive scope of this bot, it's being extensively tested before being re-enabled.—cyberpowerChat:Limited Access 21:13, 18 August 2016 (UTC)
When archived at Internet Archive, Univision has the option to block viewing at any time for the whole domain with a single line in robots.txt. I don't know if Univision would do that, but once Peter Thiel learns the articles are still available online it seems likely he would put pressure on Univision. WebCite only checks robots.txt at the time of archival. Archive.is doesn't care about robots and is outside US law. Maybe I can generate a list of the Gawker URLs and trigger a save for WebCite and archive.is, but I haven't done that in an automated fashion before so don't know if it will work. -- GreenC 21:14, 18 August 2016 (UTC)
- I believe WebCite supports archiving, even in batches, of URLs over their API, which is XML based. It will then return the respective archive URL. If you wrote a script to handle that, and generate an SQL script, I can run it my DB and InternetArchiveBot can then make the alterations. I can very quickly populate the respective URLs we are dealing with.—cyberpowerChat:Limited Access 21:23, 18 August 2016 (UTC)
- It looks like archive.is has already done the work. Every link I spot checked exists at archive.is .. also I checked webcitation.org and can't find docs on batch archiving. And I read they will take down pages on request by copyright owners so same situation as archive.org with robots. Maybe the thing to do is save by default to Wayback and if robots.txt blocks access deal with that later the normal way (not established yet). At least there is backup at archive.is and probably less than 1500 original URLs total. -- GreenC 23:44, 18 August 2016 (UTC)
- Just a note that archive.is seems to be better at archiving Twitter posts, which Gawker articles refer to frequently. I think that might be the better choice for completeness of the archive. The WordsmithTalk to me 21:49, 18 August 2016 (UTC)
- Does it help that all of Gawker's original content is CC-attribution-noncommercial? DS (talk) 22:05, 18 August 2016 (UTC)
- The ToU says "Gawker Media's original content", but since some of it may be by guest writers, plus user comments, I would be wary of a bot sweeping up everything as CC. -- GreenC 23:44, 18 August 2016 (UTC)
- InternetArchiveBot now considers the entire gawker domain as dead.—cyberpowerChat:Absent 13:39, 22 August 2016 (UTC)
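For the save-triggering idea GreenC raises, the Wayback Machine side is simple: as I understand the "Save Page Now" endpoint, fetching https://web.archive.org/save/ followed by a URL requests a snapshot of it. A minimal sketch (error handling and throttling omitted):

```python
SAVE_ENDPOINT = "https://web.archive.org/save/"

def save_requests(urls):
    """Return the Wayback 'Save Page Now' request URLs for a list of
    links; fetching each returned URL asks the Wayback Machine to
    take a snapshot of the target page."""
    return [SAVE_ENDPOINT + u for u in urls]
```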
Archivebot
I know that the new archive bot has started working, but the backlog of dead links that need to be archived is enormous. At its current pace it would never get close to catching up. I would at least suggest having two archive bots working at the same time. For example, the bot has made four edits today. To have any chance of catching up it would need to be active 24/7. BabbaQ (talk) 23:01, 10 September 2016 (UTC)
- IABot is actually designed to be extremely fast and will have no trouble, I've seen it check 5 million articles in 12 hours or so. Right now it's testing new features. A few months ago it rescued over 140k links increasing the number of Wayback links in the English Wikipedia by about 50%. -- GreenC 00:03, 11 September 2016 (UTC)
- Well, hopefully it will start working fast soon. A backlog of 90,000 articles with dead links is currently waiting. Regards,--BabbaQ (talk) 19:36, 11 September 2016 (UTC)
- I should note that at least 50,000 of them will need manual intervention, since IABot was unable to rescue them, meaning no viable archive was found for at least one tagged link.—cyberpowerChat:Offline 22:52, 11 September 2016 (UTC)
- Well, hopefully it will start working fast soon. A backlog of 90.000 articles with dead links are currently waiting. Regards,--BabbaQ (talk) 19:36, 11 September 2016 (UTC)
A bot that can automatically edit the wikilinks
If an article is renamed, should the wikilinks to that article in other articles be changed automatically by a bot to reflect the new article name? It would save users from editing the wikilinks themselves, because articles get renamed every day. TheAmazingPeanuts (talk) 09:26, 18 September 2016 (UTC)
- I would say that this is a case of WP:NOTBROKEN. --Edgars2007 (talk/contribs) 15:16, 18 September 2016 (UTC)
- Agree with Edgars2007, why would you want to replace a redirect with the page the redirect redirects to? That's kinda the point of redirects. Therefore, Not done Dat GuyTalkContribs 15:55, 18 September 2016 (UTC)
- @Edgars2007: @DatGuy: So you guys are saying it is unnecessary for a bot to automatically change wikilinks such as this, right? TheAmazingPeanuts (talk) 12:56, 18 September 2016 (UTC)
- Correct. As long as T-Minus (producer) continues to redirect to T-Minus (record producer), there is no need to change any links there. bd2412 T 13:25, 23 September 2016 (UTC)
- @BD2412: Well okay, I got my answer here. Thanks. TheAmazingPeanuts (talk) 16:47, 23 September 2016 (UTC)
Pages without infoboxes
I am trying to take on a new task of adding infoboxes to any page that doesn't have one. It would be great to have a bot that helps categorize these. At the moment I am working off of pages that link to {{Infobox requested}}. The problem is that when people add an infobox to a page, they rarely remove that template from the talk page. So I see two things that this bot could do...
- If a page has an infobox, remove {{Infobox requested}} from the talk page
- If a page does NOT have an infobox, instead of worrying about adding a template to the talk page, add the page to a hidden maintenance category Category:Pages in need of an infobox
Just my thinking. --Zackmann08 (Talk to me/What I been doing) 17:56, 29 July 2016 (UTC)
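The first task above reduces to two wikitext checks: does the article transclude an infobox, and if so, drop {{Infobox requested}} from the talk page. A hedged sketch (hypothetical helper names; no bot-framework plumbing shown):

```python
import re

def has_infobox(article_wikitext):
    """True if the article already transcludes some {{Infobox ...}}."""
    return re.search(r"\{\{\s*[Ii]nfobox\b", article_wikitext) is not None

def remove_request(talk_wikitext):
    """Drop {{Infobox requested}} (with or without parameters) from
    the talk page wikitext."""
    return re.sub(r"\{\{\s*[Ii]nfobox requested\s*(\|[^{}]*)?\}\}\n?",
                  "", talk_wikitext)
```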
- The first task seems a great idea to me; there are far too many irrelevant and outdated templates on the site, and this would cull some unneeded ones. The second idea assumes that all pages need an infobox. Welcome to the infobox wars. I suggest you drop that idea until such time as we have consensus to create infoboxes on all pages. ϢereSpielChequers 18:11, 29 July 2016 (UTC)
- Let me ping all editors from the Zeta-Jones RfC so we can discuss this further. (Kidding, of course.) The first idea is interesting, and I might take it up. Let me see if it's feasible with AWB. It should be. ~ Rob13Talk 18:17, 29 July 2016 (UTC)
- I can do the first task easily, either manually with AWB or using a bot running on AWB. As for the second task, I think you should probably seek consensus for it first. Omni Flames (talk) 06:06, 30 July 2016 (UTC)
- BRFA filed Omni Flames (talk) 07:55, 30 July 2016 (UTC)
"a" before initial vowel in double square brackets
Anyone interested in fixing this? See T100721 for more details if necessary. -- Magioladitis (talk) 07:47, 1 August 2016 (UTC)
- @Magioladitis: I don't think this is suitable as a bot request. There are always cases where the "a" isn't the English indefinite article, or the initial vowel isn't pronounced as a vowel sound. There are lots of examples at User:Sun Creator/A to An. -- John of Reading (talk) 08:11, 1 August 2016 (UTC)
John of Reading, OK, this means this task is not suitable for AWB either. We need someone to create a list of pages, then, and see how many occurrences there are. -- Magioladitis (talk) 08:17, 1 August 2016 (UTC)
I mean as a general fix. Any other suggestions are welcome, but they should be tested first. -- Magioladitis (talk) 10:34, 1 August 2016 (UTC)
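Generating a review list rather than auto-fixing, as John of Reading suggests, could start from a regex like the sketch below. It will still produce false positives, e.g. links spelled with an initial vowel but not pronounced with a vowel sound ("a [[European Union]]"), which is exactly why the output needs human review:

```python
import re

# A bare 'a' followed by a wikilink whose target starts with a vowel.
CANDIDATE = re.compile(r"\ba\s+(\[\[[AEIOUaeiou][^\]|]*(?:\|[^\]]*)?\]\])")

def candidates(wikitext):
    """Return the wikilinks following a bare 'a' that start with a
    vowel letter, for a human-reviewed worklist."""
    return CANDIDATE.findall(wikitext)
```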
List of navboxes using bodystyle=width:auto
My July 15 edit to Module:Navbox (diff) changed its outer container from a single cell layout <table>
to a <div>
. It was later reported that this broke navboxes such as Old revision of Template:Events by month links/box that used |bodystyle=width:auto
to reduce the container's width to the width of its contents — this works with tables, but not divs.
I suspect there are at least a few other navboxes that do this. I'd like a list of Template namespace transclusions of {{Navbox}}, {{Navbox with collapsible groups}}, {{Navbox with columns}}, and their redirects, that use either |bodystyle=width:auto
or |style=width:auto
, so I can fix them manually, but I'm not sure how to compile such a list myself. The regex would be tricky, for starters, since there may be spaces, newlines, and style declarations preceding width:auto. Matt Fitzpatrick (talk) 11:10, 22 July 2016 (UTC)
- Doing... Omni Flames (talk) 11:19, 1 August 2016 (UTC)
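For what it's worth, a search regex tolerant of the spaces, newlines, and preceding style declarations Matt mentions might be sketched as follows (a rough first cut; it scans raw template wikitext and will need refinement for edge cases like nested templates in the parameter value):

```python
import re

# |bodystyle= or |style= whose value contains width:auto, allowing
# whitespace around '=' and ':' and other declarations before it.
PATTERN = re.compile(
    r"\|\s*(?:bodystyle|style)\s*=\s*[^|{}]*width\s*:\s*auto",
    re.IGNORECASE,
)

def uses_width_auto(template_wikitext):
    """True if the navbox wikitext sets width:auto via bodystyle/style."""
    return PATTERN.search(template_wikitext) is not None
```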
Change interface language to English in Google links
- This was first suggested in 2014 by Sfan00 IMG (talk · contribs) but apparently never implemented.
A lot of the millions of Google Books, Google News, etc. links on Wikipedia carry a non-English language option. For example (in Midrakh Oz) the link
https://books.google.co.il/books?id=3-a0L0VACWYC&lpg=PP1&hl=iw&pg=PA202#v=onepage&q&f=false
opens Google Books in Hebrew. Since this is the English Wikipedia, the link should instead be
https://books.google.co.il/books?id=3-a0L0VACWYC&lpg=PP1&hl=en&pg=PA202#v=onepage&q&f=false
which opens Google Books in English. As Sfan00 IMG (talk · contribs) wrote back in 2014, we basically need a bot that looks for a link of the form *.google.*
, looks for the regex &hl=((.)*)&
and replaces it with &hl=en&
. --bender235 (talk) 16:29, 5 August 2016 (UTC)
- Should it also be
books.google.com
instead of books.google.co.il
? An insource search for insource:/books\.google\.co\.[a-tv-z]/
shows 7,000 pages linking to non-".uk" versions of Google Books. The search may need to be refined, but that looks like a first cut at finding target articles. – Jonesey95 (talk) 16:38, 5 August 2016 (UTC)
- Well, in general it should be
books.google.com
instead of books.google.[not com]
, as a simple precautionary security measure. In some countries, particular top-level domains may raise suspicion of authorities and ISPs (*.co.il
in some Arab countries,*.com.tw
in China, etc.), so transforming these to the genericbooks.google.com
might not be a bad idea. - However, this is a yet another fix than the one I suggested above. --bender235 (talk) 16:45, 5 August 2016 (UTC)
- Now that Quarry exists, it should be even easier to generate an initial list of links to fix ;) Sfan00 IMG (talk) 17:53, 5 August 2016 (UTC)
- The above said, I think in some instances the not linking via .com has to do with the fact that different Google Books domains apply differing interpretations of what is and isn't copyrighted/licensed in a particular region. .com I think follows the US rules, whereas .ca, .uk, .au etc. follow more localised rules IIRC. Sfan00 IMG (talk) 23:02, 5 August 2016 (UTC)
- No, that is not true. In cases of different copyright situations between, say, US and France, Google discriminates based on your IP address, not the URL you browse to. In other words, if something is illegal in France and you are currently in France, you won't see it regardless of whether you click
books.google.fr
orbooks.google.com
. --bender235 (talk) 17:22, 6 August 2016 (UTC)
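The replacement itself can be sketched a little more defensively than the &hl=((.)*)& regex proposed above, so it also handles hl= as the first parameter (?hl=) or the last one (no trailing &):

```python
import re

def force_english(url):
    """Rewrite the hl= parameter of a Google link to hl=en, whether it
    is introduced by '?' or '&' and whether or not another parameter
    or a fragment follows it."""
    return re.sub(r"([?&]hl=)[^&#]*", r"\1en", url)
```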
Requested articles check against draftspace
Would it be possible for a bot to check each of the redlinks within the subpages listed at Wikipedia:WikiProject Requested articles against draftspace and just create a list of which pages have drafts created for them? The pages that are blue are obviously created (or are currently redirects), but if someone has started a draft on one, it would be easier to remove it and link to the draft (which I'll review and do manually) and merge whatever content, if any, is listed there. I doubt it's a lot (if there's any), but it would be helpful to know, since some pages there have sources worth incorporating and working on. Basically, it would be (1) have a list of redlinks; (2) check if Draft:Redlink exists for each one; and (3) if it exists, give me the page where you found it and the draftspace link. I suspect we'll have false positives as well, but I think it'd be helpful. -- Ricky81682 (talk) 07:45, 7 August 2016 (UTC)
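Step (2) is a straightforward existence check. A sketch with hypothetical helper names: the first function maps a redlink to its would-be draft title, and the second builds a standard action=query API URL that batches up to 50 titles (pages that don't exist come back flagged "missing" in the response):

```python
from urllib.parse import quote

def draft_title(redlink):
    """Map a requested-article title to its would-be draftspace title."""
    if not redlink:
        return redlink
    return "Draft:" + redlink[0].upper() + redlink[1:]

def existence_query(titles):
    """Build an API query URL reporting which of up to 50 draft titles
    exist; missing pages carry a 'missing' flag in the JSON result."""
    joined = "|".join(draft_title(t) for t in titles[:50])
    return ("https://en.wikipedia.org/w/api.php?action=query&format=json"
            "&titles=" + quote(joined, safe="|"))
```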
Change interface language to English in Google links
- This has been first suggested in 2014 by Sfan00 IMG (talk · contribs) but apparently never implemented.
A lot of the millions of Google Books, Google News, etc. links on Wikipedia carry a non-English language option. For example (in Midrakh Oz) the link
https://books.google.co.il/books?id=3-a0L0VACWYC&lpg=PP1&hl=iw&pg=PA202#v=onepage&q&f=false
opens Google Books in Hebrew. Since this is the English Wikipedia, the link should instead be
https://books.google.co.il/books?id=3-a0L0VACWYC&lpg=PP1&hl=en&pg=PA202#v=onepage&q&f=false
which opens Google Books in English. As Sfan00 IMG (talk · contribs) wrote back in 2014, we basically need a bot that looks for a link of the form *.google.*
, looks for the regex &hl=((.)*)&
and replaces it with &hl=en&
. --bender235 (talk) 16:29, 5 August 2016 (UTC)
- Should it also be
books.google.com
instead of books.google.co.il
? An insource search for insource:/books\.google\.co\.[a-tv-z]/
shows 7,000 pages linking to non-".uk" versions of Google Books. The search may need to be refined, but that looks like a first cut at finding target articles. – Jonesey95 (talk) 16:38, 5 August 2016 (UTC)- Well, in general it should be
books.google.com
instead of books.google.[not com]
, as a simple precautionary security measure. In some countries, particular top-level domains may raise suspicion of authorities and ISPs (*.co.il
in some Arab countries, *.com.tw
in China, etc.), so transforming these to the genericbooks.google.com
might not be a bad idea. - However, this is yet another fix, different from the one I suggested above. --bender235 (talk) 16:45, 5 August 2016 (UTC)
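That host normalisation could be sketched separately from the hl= fix, e.g. (illustrative only, and intentionally limited to books.google.* hosts):

```python
import re

def normalize_google_books_host(url: str) -> str:
    """Rewrite any books.google.<cctld> host to books.google.com,
    keeping the path and query string unchanged."""
    return re.sub(r"//books\.google\.[a-z.]+/", "//books.google.com/", url)
```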
- Now that Quarry exists, it should be even easier to generate an initial list of links to fix ;) Sfan00 IMG (talk) 17:53, 5 August 2016 (UTC)
- The above said, I think in some instances the preference for not linking via .com is to do with the fact that different Google Books domains apply differing interpretations of what is and isn't copyrighted/licensed in a particular region. .com I think follows the US rules, whereas .ca, .uk, .au etc. follow more localised rules IIRC. Sfan00 IMG (talk) 23:02, 5 August 2016 (UTC)
- No, that is not true. In cases of different copyright situations between, say, US and France, Google discriminates based on your IP address, not the URL you browse to. In other words, if something is illegal in France and you are currently in France, you won't see it regardless of whether you click
books.google.fr
orbooks.google.com
. --bender235 (talk) 17:22, 6 August 2016 (UTC)
WikiProject Belgrade to WikiProject Serbia/Belgrade task force
Need help with point 13 ("Replace usage of the moved project's banner with the parent/task force banner") at Converting existing projects to task forces for Wikipedia:WikiProject Belgrade→Wikipedia:WikiProject Serbia/Belgrade task force. The banner is at Template:WikiProject Belgrade.--Zoupan 00:21, 5 September 2016 (UTC)
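For point 13, the replacement itself is a simple text substitution; a sketch (the task-force parameter name |belgrade=yes is an assumption — check the actual parameter used by Template:WikiProject Serbia first):

```python
import re

def retag_banner(talk_text: str) -> str:
    """Replace {{WikiProject Belgrade|...}} with {{WikiProject Serbia}}
    plus a task-force flag, carrying the old parameters across.
    Note: this simple pattern assumes no nested templates inside
    the banner."""
    return re.sub(r"\{\{\s*WikiProject Belgrade\s*([^{}]*)\}\}",
                  r"{{WikiProject Serbia|belgrade=yes\1}}",
                  talk_text)
```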
- Doing..., perhaps with a AWB bot? I'll try and finish this in two weeks, no promises :P. Dat GuyTalkContribs 17:29, 13 September 2016 (UTC)
- BRFA filed. Dat GuyTalkContribs 18:07, 13 September 2016 (UTC)
- Zoupan 385 pages left currently. Consider it Done Dat GuyTalkContribs 20:00, 1 October 2016 (UTC)
- Thank you very much, Dat Guy! Appreciate it.--Zoupan 20:03, 1 October 2016 (UTC)
Autowikibrowser
Is it possible to make the AWB bot automatic? I mean, if I'd like to add categories to pages, I have to manually hit save for every page. I think it would be better if it was automatic for tasks like these. Saadkhan12345 (talk) 15:54, 7 October 2016 (UTC)
- @Saadkhan12345: If editors could make automated edits without seeking permission, that would negate the entire point of having special bot accounts and this process. See Wikipedia:Bot policy. Jc86035 (talk) Use {{re|Jc86035}}
to reply to me 16:05, 7 October 2016 (UTC) - Not done, you haven't highlighted what category you would like to add which could be automated, and it is extremely unlikely that you will have the bot flag on your normal account. Dat GuyTalkContribs 16:06, 7 October 2016 (UTC)
- You can submit a request for approval to get a bot account. Be prepared to offer quite a bit of evidence that a) you can be trusted as a bot operator, b) you can be trusted as the operator of this bot, and c) this bot can be trusted to make the edits in question. It is unlikely that your BRFA will be successful, but that is the path to get AWB to operate in a bot-mode. --Izno (talk) 16:14, 7 October 2016 (UTC)
I was just saying that if I would like to add categories to pages using AWB, it would be nice to automate that rather than sit there and manually hit "save". It would save a lot of time and be much easier. What I am doing is requesting that this ability be added to AWB (if possible by whoever made it lol)... OK, so I understand... this is possible, but I would have to get approval for the bot to make automated edits... correct? Saadkhan12345 (talk) 16:45, 7 October 2016 (UTC) ok ty Izno Saadkhan12345 (talk) 16:49, 7 October 2016 (UTC)
- @Saadkhan12345: Yes, that's right. AWB already has "Automatic save" options, but you can only see them if you log in to AWB using a bot account. And to get a bot account, you have to go through the BRFA procedure to get your proposed task approved in advance. -- John of Reading (talk) 21:15, 7 October 2016 (UTC)
Bot idea: Redirect Refining Bot
Sometimes a redirect has the same exact name as a subheading of the article the redirect points to.
Finding and sectionalizing redirects of this kind looks like something that could easily be automated using a bot.
For example, the redirect "Map design" leads to "Cartography", which has a subheading titled "Map design".
The redirect didn't lead to the section until I manually added "#Map design" to the redirect's target, making it a sectional redirect, like this:
#REDIRECT [[Cartography#Map design]]
Is making a bot that does this feasible and worthwhile?
What are the design considerations? The Transhumanist 12:12, 23 July 2016 (UTC)
- Coding... KSFTC 20:15, 23 July 2016 (UTC)
- Is this always a good idea, though? Sometimes you might want to link to the lead of the article as it provides context. — Earwig talk 20:16, 23 July 2016 (UTC)
- I think it is. There are many redirects to page sections. If someone's looking for a particular part of the information on a page, they probably already have the necessary context. They can also scroll up if they don't. KSFTC 20:20, 23 July 2016 (UTC)
- I would like to see a sample of 50 random cases before letting a bot make the edits. If human evaluation is deemed necessary then a bot list of all candidates will still be useful. PrimeHunter (talk) 20:30, 23 July 2016 (UTC)
- I assume that will be part of the BRFA. KSFTC 03:11, 24 July 2016 (UTC)
- How would a bot go about finding them in the first place? The Transhumanist 01:45, 25 July 2016 (UTC)
- @The Transhumanist: I'm not sure what you mean. It isn't difficult to tell whether a page is a redirect. KSFTC 13:17, 25 July 2016 (UTC)
- Finding redirects the titles of which are subheadings of the target article. ("Map design" redirects to Cartography, which just happens to have a section titled "Map design"). So, regex aside, to find other redirects like this, you'd have to search the target article of ALL redirects for the title of the redirect with equals signs around it like this: =Map design=. Right?
If the search was positive, then you would know to change #REDIRECT Cartography to #REDIRECT Cartography#Map design, for example.
So my questions are, "How does a bot search all redirects, and then access all target articles to search for the title of the redirects?" Do you have to get and scrape every redirect page and every target page?
What kind of resources (bandwidth) would this take up? Or would you do this with the database offline, and then overwrite online the redirects needing to be changed?
There are millions of redirects, right? 5,000,000 server calls limited to one per second takes over 57 days. I'm just trying to understand what the whole task entails.
What would the bot actually have to do, step by step? The Transhumanist 22:50, 25 July 2016 (UTC) - @KSFT: The Transhumanist 20:24, 26 July 2016 (UTC)
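Step by step, the per-redirect check is small once the two pages' wikitext is in hand; a rough sketch (names illustrative, fetching and saving omitted):

```python
import re

def refine_redirect(redirect_title, redirect_text, target_text):
    """If the target page has a heading exactly matching the redirect's
    title, return new redirect wikitext pointing at that section;
    otherwise return None. Assumes both pages' wikitext has already
    been fetched (e.g. via a dump or the API)."""
    m = re.match(r"#REDIRECT\s*\[\[([^\]#|]+)\]\]", redirect_text,
                 re.IGNORECASE)
    if not m:
        return None  # not a plain redirect, or already targets a section
    target = m.group(1).strip()
    # scan the target's wikitext for headings like == Map design ==
    for heading in re.findall(r"^=+\s*(.*?)\s*=+\s*$", target_text,
                              re.MULTILINE):
        if heading.lower() == redirect_title.lower():
            return f"#REDIRECT [[{target}#{heading}]]"
    return None
```

On the example above, refine_redirect("Map design", "#REDIRECT [[Cartography]]", …) returns "#REDIRECT [[Cartography#Map design]]" when the target's wikitext contains a "Map design" heading.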
- @The Transhumanist: I have seen this; I just kept forgetting to respond. I don't have much experience with bots, so if this is infeasible, we can decline it. I don't know how it would be done, exactly. KSFTC 20:25, 26 July 2016 (UTC)
@Rich Farmbrough and Cacycle: We need further comments. The Transhumanist 23:42, 26 July 2016 (UTC)
- The Transhumanist, we don't have to worry about the database hits, as just running SQL queries directly on the redirect table will work fine for our purposes. The chunk size I'm envisioning for a task like this is redirects fetched perhaps 50 at a time, and then the targets' section headers parsed 50 pages at a time, which will (with sensible delaying) probably result in an acceptable six edits per minute or so. Enterprisey (talk!) (formerly APerson) 03:54, 27 July 2016 (UTC)
- @Enterprisey: What's the next step? The Transhumanist 20:44, 29 July 2016 (UTC)
- @The Transhumanist: I'll get around to writing it eventually. I should have time this week. KSFTC 04:39, 1 August 2016 (UTC)
- @Enterprisey: How goes it? The Transhumanist 21:02, 9 August 2016 (UTC)
- @The Transhumanist: I think you pinged the wrong person. I have not gotten around to this yet. If someone else wants to, feel free. Otherwise, I still plan to eventually. KSFTC 02:30, 10 August 2016 (UTC)
Another redir issue
@The Earwig, Enterprisey, KSFT, and PrimeHunter: Above we talked about fixing a redirect that points to a page that has a subheading that is the same as the title of the page being redirected.
Well, a similar situation is a redirect that does not match any subheading in the target page, but which we know should have a section. For example, History of domestication redirects to Domestication. But Domestication has no history section for History of domestication to point to. Should the bot discussed above be programmed to handle this type of situation too, and create empty sections? The resulting link in the redirect in this example would be Domestication#History. The availability of such sections may prompt editors to write about the history of the respective subjects. The Transhumanist 20:49, 14 August 2016 (UTC)
- This is definitely beyond what bots are smart enough to handle, I think. Maybe a web tool that gives suggestions for sections to create, but not an automated bot. — Earwig talk 20:59, 14 August 2016 (UTC)
- Agree. Empty sections are annoying enough when humans make them. A bot would make many empty sections for things already covered in existing sections, and often place the section oddly. PrimeHunter (talk) 21:18, 14 August 2016 (UTC)
Publisher
Is it possible to request that a bot add the publisher to the |publisher= parameter of references in articles? I see articles every day without the publisher named. I think such a task would be beneficial for the project when people are searching for information and its publishers.--BabbaQ (talk) 12:01, 20 August 2016 (UTC)
- @BabbaQ: This couldn't be done for all references, as it may not be possible to programmatically determine the publisher. It's also not sensible for a citation where
|work=
includes a link to a Wikipedia article. But it may be possible (and could be added to AWB's routine tasks) for a set of common sources. Do you have some examples in mind? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:37, 22 August 2016 (UTC)- @Pigsonthewing: - Sorry for the late reply. If I am getting your point correctly, I would suggest adding sources like BBC News, Sky News and CNN News as a routine task to fill. It would be quite easy, I presume. Also, for my own benefit while creating articles about Swedish subjects, I would appreciate it if Aftonbladet, Expressen, Dagens Nyheter (DN), Svenska Dagbladet (SvD) and The Local were added. If I have understood you correctly, those are the kinds of sources I think would benefit the Wikipedia project. --BabbaQ (talk) 14:23, 26 August 2016 (UTC)
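For a fixed whitelist of sources like these, the lookup reduces to a domain-to-publisher table; a sketch (the mapping below is illustrative and would need vetting before any bot run):

```python
from urllib.parse import urlparse

# Illustrative, hand-maintained mapping; not exhaustive.
KNOWN_PUBLISHERS = {
    "bbc.co.uk": "BBC News",
    "news.sky.com": "Sky News",
    "cnn.com": "CNN",
    "aftonbladet.se": "Aftonbladet",
    "expressen.se": "Expressen",
    "dn.se": "Dagens Nyheter",
    "svd.se": "Svenska Dagbladet",
    "thelocal.se": "The Local",
}

def guess_publisher(url: str):
    """Return the publisher for a whitelisted domain, else None
    (a bot should skip the citation rather than guess)."""
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    return KNOWN_PUBLISHERS.get(host)
```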
Idea: Multi-wiki KML bot
Are there any bot operators willing to work on a multi-work bot task? If so, please see meta:Talk:KML files - Evad37 [talk] 04:06, 27 August 2016 (UTC)
Add Category:Tour de France cyclists to Tour de France task force
I started Wikipedia:WikiProject Cycling/Tour de France task force yesterday, and the scope includes all cyclists that have ridden the Tour. I have added all the other articles to the task force using AWB, but can a bot save me time and bother and add the 2,290 articles in Category:Tour de France cyclists for me? What needs doing is |tdf=yes
needs adding to end of {{WikiProject Cycling}}
({{WP Cycling}}
). BaldBoris 23:03, 6 October 2016 (UTC)
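The edit itself is mechanical; a sketch of the per-page transformation (assuming the banner contains no nested templates):

```python
import re

def add_tdf_flag(talk_text: str) -> str:
    """Append |tdf=yes to a {{WikiProject Cycling}} / {{WP Cycling}}
    banner, unless the flag is already present."""
    def repl(m):
        inner = m.group(1)
        if "tdf=" in inner:
            return m.group(0)  # already tagged; leave untouched
        return "{{" + inner + "|tdf=yes}}"
    return re.sub(r"\{\{((?:WikiProject Cycling|WP Cycling)[^{}]*)\}\}",
                  repl, talk_text)
```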
- BaldBoris Seems close enough to what my bot already does, so consider it Coding.... However, how would you like it? {{WikiProject Cycling|tdf=yes|''restoftheoriginaltemplate''}}? Also, may I convert WP Cycling to WikiProject Cycling? Dat GuyTalkContribs 05:28, 7 October 2016 (UTC)
- @DatGuy: It would need adding to the end of the template {{WikiProject Cycling|''restoftheoriginaltemplate''|tdf=yes}}. As WP Cycling is a redirect, it's not really necessary to change it. I'm just wondering if I should open a discussion before this goes ahead. It seems pretty logical to me that all participants of the Tour de France should be covered by the task force. BaldBoris 15:14, 7 October 2016 (UTC)
- @BaldBoris: Oh yes, this should probably receive a bit of wider discussion before I start to program the bot (which will be super easy). Try starting an RfC. The proposal will probably easily pass, and you notify other members of WikiProject cycling. Just remember not to canvass. Dat GuyTalkContribs 15:22, 7 October 2016 (UTC)
- I started a discussion here. I'm not sure how long it'll take before a decision is made, but it shouldn't be too long. BaldBoris 17:00, 7 October 2016 (UTC)
Done. Dat GuyTalkContribs 19:29, 15 October 2016 (UTC)
UserStatus changing bot
It would be nice if there was a bot that would go around to the userpages of users who haven't been active after x amount of time and change their UserStatus subpage to away (and then to offline after some more time). That way users that forget to change it before going away or offline for a bit wouldn't be shown as being online when they aren't actually online. And of course, the bot would only do this to willing users who sign up for it (perhaps by adding a new parameter to the template that signals the bot?) Perhaps this is too social of a suggestion or not enough people use the template to warrant this kind of bot, but I thought I'd suggest it to see if anything comes of it. -- MorbidEntree - (Talk to me! (っ◕‿◕)っ♥)(please reply using {{ping}}) 05:26, 4 August 2016 (UTC)
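For what it's worth, the decision logic such a bot would need is tiny; a sketch with assumed thresholds (one day until "away", one week until "offline" — both figures would need discussion):

```python
from datetime import datetime, timedelta

def status_for(last_edit, now,
               away_after=timedelta(days=1),
               offline_after=timedelta(days=7)):
    """Return the status the bot should set for an opted-in user,
    or None to leave the user's own setting alone."""
    idle = now - last_edit
    if idle >= offline_after:
        return "offline"
    if idle >= away_after:
        return "away"
    return None
```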
- Needs wider discussion.—cyberpowerChat:Online 16:23, 4 August 2016 (UTC)
- Support - as it would be beneficial.--BabbaQ (talk) 11:57, 20 August 2016 (UTC)
- This needs to be opt-in/opt-out and should be able to take the page name it needs to change; then I'll support. VarunFEB2003 11:28, 6 September 2016 (UTC)
- Support - it would be very helpful, and would not be too intrusive as long as it's opt-in. How would it adjust for differing online/offline layouts? Jshxe (talk) 19:54, 16 October 2016 (UTC)
Draft space redirects
An adminbot should create a fully protected redirect from Draft:A to A for each article A (including disambiguation pages). If Draft:A already exists, then there are three cases to consider.
- Draft:A is not a redirect. In this case, the adminbot will ignore it.
- Draft:A already redirects to A. In this case, the adminbot will fully protect it.
- Draft:A is a redirect to some target other than A. In this case, the adminbot will fully protect and retarget it to A.
63.251.215.25 (talk) 17:05, 2 September 2016 (UTC)
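The three cases reduce to a small decision function; a sketch (protection and page writes omitted, names illustrative):

```python
def plan_draft_redirect(article: str, draft_text):
    """Decide what to do with Draft:<article>. draft_text is None
    when the draft page does not exist yet."""
    target = f"[[{article}]]"
    if draft_text is None:
        return ("create+protect", f"#REDIRECT {target}")
    if not draft_text.lstrip().lower().startswith("#redirect"):
        return ("ignore", None)                         # case 1
    if target in draft_text:
        return ("protect", None)                        # case 2
    return ("retarget+protect", f"#REDIRECT {target}")  # case 3
```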
- What would the benefit of this bot be? Enterprisey (talk!) 17:31, 2 September 2016 (UTC)
- First off, I'm not the OP. I didn't edit while logged out. I'm also not a bot operator. Just watching this page because of my request above. Anyway, I personally wouldn't feel safe with an adminbot running around, especially if it were to malfunction. I'd feel much safer if the bot tagged a redirect and then an admin could see and fully protect it. I'm also not sure why a redirect from a draft would need to be fully protected, other than because of vandalism and edit-warring, and WP:AIV and WP:EWN already take care of that. And they don't preemptively protect pages from vandalism and edit-warring. They only do it if it's in progress. -- Gestrid (talk) 18:47, 2 September 2016 (UTC)
- Agree with Enterprisey. Why is this mass creation of millions of redirects helpful? Pppery (talk) 20:09, 2 September 2016 (UTC)
- Needs wider discussion. Clearly controversial. ~ Rob13Talk 19:19, 30 October 2016 (UTC)
Remove DEFAULTSORT keys that are no longer needed
Now that English Wikipedia is using UCA collation for categories (phabricator:T136150), there are a large number of DEFAULTSORT keys that are no longer needed. For example, it is no longer necessary to have DEFAULTSORT keys for titles that begin with diacritics, like Über or Łódź. (Those will automatically sort under U and L now.) Someone should write a bot to remove a lot of these unnecessary DEFAULTSORT keys (for example, when the title is the same as the DEFAULTSORT key except for diacritics). Kaldari (talk) 21:40, 6 September 2016 (UTC)
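The "same except for diacritics" test can be sketched with Unicode normalisation (note that letters without an NFD decomposition, such as Ł, would still need a hand-made mapping):

```python
import unicodedata

def strip_diacritics(s: str) -> str:
    """Drop combining marks after NFD normalisation (Über -> Uber).
    Stroked letters like Ł do not decompose and are left as-is."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

def defaultsort_redundant(title: str, sort_key: str) -> bool:
    """True when the DEFAULTSORT key differs from the title only by
    diacritics, so UCA collation already sorts it correctly."""
    return strip_diacritics(title) == sort_key
```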
- Not really. The sort keys are a useful part of the product. They also show that the matter has been attended to, and discourage people from making up incorrect sort keys. All the best: Rich Farmbrough, 21:29, 27 September 2016 (UTC).
- Needs wider discussion. In any event, this is likely to be controversial, as the sort keys are not doing active harm. This would need consensus. ~ Rob13Talk 19:18, 30 October 2016 (UTC)
Fix thousands of citation errors in accessdate
Since the recent changes in the citation templates (see Update to the live CS1 module weekend of 30–31 July 2016), the parameter access-date
now requires a day and no longer accepts a "month-year" formatted date such as August 2016
and displays a CS1 error (Check date values in: |access-date=) as soon as the article has been edited.
- See example
- I have no idea how many articles this concerns on wikipedia.
- In the last 10 months I have used this now deprecated format in several thousand citations (in about 1000 articles).
- TODO: change/fix
access-date
or accessdate
from, for example, August 2016
to 1 August 2016
, by adding the first day of the month. - Special case: if the parameter
date
contains a more recent date (e.g. 4 August 2016
) than the fixed accessdate parameter (i.e. 1 August 2016
), the value in access-date would be older than that in date. Although accessing a cited source before its publication date doesn't seem very sensible to me, there is (currently) no CS1 error, so adjusting for "accessdate == date" is purely optional. - Add a
1\s
in front of the written-out month ("August"), maintaining the original spacing, i.e. a white space between "=" and the date value.
Adding a day to the accessdate parameter seems like a straightforward change to me. However, if I am the only editor on Wikipedia that used such a date format, or if my request causes some kind of controversy, I'll prefer to do these changes manually. Thx for the effort, Rfassbind – talk 12:15, 4 August 2016 (UTC)
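For reference, the requested transformation itself is a one-line substitution; a sketch (restricted to month-year values, so complete dates are never touched):

```python
import re

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")

def fix_accessdate(cite: str) -> str:
    """Prepend '1 ' to a month-year access-date, e.g.
    |access-date=August 2016 -> |access-date=1 August 2016."""
    pattern = rf"(\|\s*access-?date\s*=\s*)({MONTHS})(\s+\d{{4}})"
    return re.sub(pattern, r"\g<1>1 \g<2>\g<3>", cite)
```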
- Like this? If so, I can fix at least some portion of them.
- As for the special case, it is not unusual for access-dates to precede publication dates, since some publication dates are in the future. Putting "1" in front of the month gets us within 30 days of the actual access date, which is close enough for verification. – Jonesey95 (talk) 12:55, 4 August 2016 (UTC)
- Oppose. Only the editor who accessed the information knows on what day the information was accessed. Only the editor who added the access date should fix this so-called error. I personally see no need to fix this "problem". Jc3s5h (talk) 13:29, 4 August 2016 (UTC)
@Jc3s5h I expected this kind of unhelpful comment, and that's why I was reluctant to post this request in the first place. It's depressing sometimes, yes, but that's the way wikipedia works. @Jonesey95 yes that's a perfectly good fix. Rfassbind – talk 15:47, 4 August 2016 (UTC)
- I won't hesitate to propose a community ban against any bot that is designed to falsify information. Jc3s5h (talk) 15:56, 4 August 2016 (UTC)
- Access dates can be easily determined by looking at when the URL was added. That can then be used to reasonably extrapolate the access dates. InternetArchiveBot does the same when determining archive snapshot dates.—cyberpowerChat:Online 16:16, 4 August 2016 (UTC)
- Access dates for journal citations with DOI or other identifier values can also be removed. Per the {{cite journal}} documentation, "access-date is not required for links to copies of published research papers accessed via DOI or a published book". – Jonesey95 (talk) 18:27, 4 August 2016 (UTC)
- @Cyberpower678: Doesn't always work, particularly if text is copypasted between articles, see my post of 14:54, 4 August 2016 at User talk:Corinne#Sol Invictus. --Redrose64 (talk) 18:45, 4 August 2016 (UTC)
We can even check whether the link is still alive and put the current date. -- Magioladitis (talk) 16:24, 4 August 2016 (UTC)
- Oppose adding the day. The format should not insist on it, and that should be reverted, as I doubt that a well-attended RfC has been held to see if there is a consensus for such a change. The change opens up a can of worms over the correct place for the day: "January 16, 2016" or "16 January 2016" or 2016-01-16. The reason for the access date is to help editors in the future find an archived version of the page if necessary. A granularity of a month is sufficient for that. -- PBS (talk) 18:34, 4 August 2016 (UTC)
- The day is useful for recent events, especially for web pages that are likely to be revised. Knowing whether a site was accessed before or after a late-breaking revelation can help an editor decide whether a site should be revisited, with an eye to revising the article to incorporate the latest information. But for older sources, the day is seldom useful. Jc3s5h (talk) 19:17, 4 August 2016 (UTC)
- Then leave it up to the judgement of the editor and do not impose the day automatically with a bot. -- PBS (talk) 20:25, 13 August 2016 (UTC)
- Support if narrow in scope Since Rfassbind has personally revised many 100s (or more) of these pages, if he can attest to the access-date of each reference, I see no problem adding the day corresponding to his article revisions, which only comes to light after the module update. I don't know enough about the
|access-date=
portion of the module update to have a further opinion yet. ~ Tom.Reding (talk ⋅dgaf) 22:44, 4 August 2016 (UTC)
- Methodology: Since Rfassbind has been consistent with his edit summaries, they can be searched for text such as "overall revision". These are all exclusively minor planet pages (as can/will be double checked as such), and all share similar online sources from which their references are built, so I have no problem taking this list, organizing it by date, and applying that date to all
|access-date=
parameters on the corresponding page (or whichever references Rfassbind confirms checking). As a further check, I'd only edit the "old" access-dates which match the corresponding month & year of the overall revision. ~ Tom.Reding (talk ⋅dgaf) 00:06, 5 August 2016 (UTC)
- Methodology: Since Rfassbind has been consistent with his edit summaries, they can be searched for text such as "overall revision". These are all exclusively minor planet pages (as can/will be double checked as such), and all share similar online sources from which their references are built, so I have no problem taking this list, organizing it by date, and applying that date to all
- Support as it would benefit the project.BabbaQ (talk) 11:57, 20 August 2016 (UTC)
- @BabbaQ How? -- PBS (talk) 19:46, 27 August 2016 (UTC)
- Oppose per Jc3s5h and Cyberpower678. Providing a source answers the question: "where exactly did you get this information from", not "where else you can probably find this information, maybe". It is a bit dishonest to change the accessdate to something that for any given day has only about a 3% chance of being the actual accessdate. It's also an important principle that people who use citation templates should read the template documentation and comply with it, instead of relying on others to "fix" things for them, especially in cases such as this when we aren't mind readers. Cyberpower678's point has merit: we should limit ourselves to what archive bots do. It doesn't fix all cases, but it does not introduce what are probable mistakes either. – Finnusertop (talk ⋅ contribs) 20:16, 27 August 2016 (UTC)
- I think you misunderstand. The CS1 and CS2 templates have been updated to reject citation templates when the accessdate parameter is missing a day and only has the month and the year. The idea of this request, since there are now thousands of citation templates giving nice red errors everywhere, is to have a bot add a day to these access dates to fix the error. My idea is that a bot can extrapolate the access date based on when the link was added, since in 95% of cases the link is added the same day it was initially accessed when sourcing.—cyberpowerChat:Limited Access 20:22, 27 August 2016 (UTC)
- It would be better to make the templates recognize just the year, or just the year and month, for access dates that are older than the "improvement" to the templates. Jc3s5h (talk) 21:02, 27 August 2016 (UTC)
- @Cyberpower678 where is the RfC where a substantial number of editors agreed to this change? If no such RfC was held, then no bot changes to enforce it should be made, and the obvious solution is to remove the red error message. If there are thousands of them, then a lot (some added by me) were added with the day deliberately missed out, so where is the RfC justifying changing them? -- PBS (talk) 14:41, 29 August 2016 (UTC)
- I wouldn't know. I didn't make the change to the CS templates. I'm just suggesting what a bot could do to fix the red error messages.—cyberpowerChat:Online 14:44, 29 August 2016 (UTC)
- @Cyberpower678 where is the RFC where a substantial number of editors agreed to this change? If no such RfC was held then no bot changes to enforce it should be made; and the obvious solution is to removed the red error message. If there are thousands of them then a lot (some added by me) were added with the day date deliberately miss out, so where is the RfC justifying changing them? -- PBS (talk) 14:41, 29 August 2016 (UTC)
- Comment Original discussion on incomplete access dates. The CS1 forum is the correct place to discuss if a CS1 error should be generated or not. I'm afraid this discussion is deadlocked due to questions about legitimacy of the CS1 error which can't be resolved here. -- GreenC 15:07, 29 August 2016 (UTC)
- Here are the stats:
- Articles with a bad access-date: 14665
- Cites with a bad access-date: 32255
- Full list available on request (from the 8/20/2016 database) -- GreenC 17:35, 29 August 2016 (UTC)
- Support—and base the day added on the day the edit was made, not the first or last of the month. Major style guides specify that an access date needs to be complete to the day, and so should we. I would also add a piece of logic so that the date added can't be earlier than the publication date if a full date is specified there, for obvious reasons. Imzadi 1979 → 22:47, 30 August 2016 (UTC)
- Oppose and change the citation template back to accepting month and year. There is no valid reason for it to not accept it. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 00:48, 31 August 2016 (UTC)
- Support because this will never be reverted in the cite templates, for the very sane and reasonable reason that all style guides require access dates to be complete to the day. So basically, per Imzadi. This obviously needs to be based on the revision where the link was first added, so there needs to be some extensive testing to deal with reverts and vandalism. Blindly putting the date to the first of the month, however, is unacceptable. Headbomb {talk / contribs / physics / books} 12:19, 8 September 2016 (UTC)
- Support Full dates have been in the style guidelines since 2006 (see links by Trappist the Monk). We should follow the documentation/guidelines. If this was a new guideline recently added I could understand, but it's not new. -- GreenC 01:28, 10 September 2016 (UTC)
- Support per Green Cardamom. I just don't see any problem with fixing the date relative to the style guide. It would remove unsightly error messages and would be accurate to within 30 days of the access date. --RuleTheWiki (talk) 11:08, 14 October 2016 (UTC)
- Oppose (with possible compromise): I've worked through hundreds of these now by hand. Besides the accuracy problems pointed out above with blindly adding "1", there have been dozens of other problems I've found in the references that I've fixed. Of course, dead URLs are the most common, but there have been incorrect titles/authors, incomplete URLs, URLs that are moved, incorrect references - just about every problem you can think of in a reference. While working through the mountain is a large task, and there are certainly some similar pages (so far, I've seen both the minor planet pages and the Gitmo detainee pages) that could benefit from limited bot help, I think the overall improvement to references is worth some short-term red warnings. Though, if someone wants to change the incomplete date warnings to a separate green category and just leave the other date errors red, I'd strongly support that. Floatjon (talk) 15:10, 17 October 2016 (UTC)
- Needs wider discussion. This just isn't the place to hold this discussion. Advertise it at a village pump with an RfC and then return here after it's been closed. ~ Rob13Talk 19:22, 30 October 2016 (UTC)
Fix references to images and other elements as being on the left or right
Per Wikipedia:Manual of Style/Accessibility#Images #6, it is not appropriate to refer in article text to images and elements as being on the "left" or "right" side of the page, since this information is inaccurate for mobile users and irrelevant for visually impaired users. We should have a bot make the following substitutions:
"the <picture|diagram|image|table|box|...> <to|at|on> [the] <left|right>"
→ "the adjacent <picture|diagram|image|table|box|...>"
Thoughts? —swpbT 19:00, 2 November 2016 (UTC)
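For concreteness, a minimal Python sketch of the proposed substitution; the word list is an illustrative subset, not the full set of nouns a real run would need.

```python
import re

# Illustrative subset of the proposed pattern: "the <noun> <to|at|on> [the] <left|right>"
PATTERN = re.compile(
    r"\bthe (picture|diagram|image|table|box) (?:to|at|on) (?:the )?(?:left|right)\b",
    re.IGNORECASE,
)

def fix_directional_refs(text):
    """Rewrite directional references to adjacent elements."""
    return PATTERN.sub(r"the adjacent \1", text)

print(fix_directional_refs("As shown in the diagram on the right, flow reverses."))
# As shown in the adjacent diagram, flow reverses.
```

Note that a blind run of any such pattern would also rewrite legitimate in-image descriptions, so a real task would need context checking rather than bare substitution.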
- Not a good task for a bot. Very much a WP:CONTEXTBOT. Consider, for example, "There are two boxes in this image. The box on the left is red, while the box on the right is blue." Anomie⚔ 20:56, 2 November 2016 (UTC)
- Withdrawn. Will pursue as an AWB task instead. —swpbT 16:48, 3 November 2016 (UTC)
ReminderBot
I request an on-wiki bot (or some other way) to set reminders for tasks: "Remind me in N days about 'A'", etc. A talk page message reminder or anything is okay. --Tito Dutta (talk) 17:09, 9 February 2016 (UTC)
- See previous discussions at Wikipedia:Village pump (technical)/Archive 143#Reminderbot? and Wikipedia:Bot requests/Archive 37#Reminder bot. It needs more definition as to how exactly it should work. Anomie⚔ 17:22, 9 February 2016 (UTC)
- This may work in the following way:
- a) A user will add tasks to a subpage such as User:Titodutta/Reminder in this format: {{Remind me|3 days}}. The bot will then remind them on their user talk page.
- b) Anomie, in a discussion, one may tag something like this: {{Ping|RemindBot|3 days}}.
Please tell me your views and opinion. --Tito Dutta (talk) 18:31, 10 February 2016 (UTC)
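How a bot might parse such a template and compute the due date could look roughly like this; the template name and the "3 days" format come from the proposal above, and the deliberately minimal parsing (days and weeks only) is an assumption.

```python
import re
from datetime import datetime, timedelta

# Matches e.g. {{Remind me|3 days}} or {{Remind me|2 weeks}}
REMIND_RE = re.compile(
    r"\{\{\s*Remind me\s*\|\s*(\d+)\s*(day|week)s?\s*\}\}", re.IGNORECASE
)

def due_date(wikitext, placed):
    """Return when the reminder placed at `placed` falls due, or None."""
    m = REMIND_RE.search(wikitext)
    if not m:
        return None
    n, unit = int(m.group(1)), m.group(2).lower()
    delta = timedelta(days=n) if unit == "day" else timedelta(weeks=n)
    return placed + delta

print(due_date("{{Remind me|3 days}}", datetime(2016, 2, 10)))  # 2016-02-13 00:00:00
```

The open question raised below, who placed the template, would be answered by reading the revision history of the page, much as {{ping}} attributes notifications to the signing editor.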
- Outside of a user subpage, how will the bot know who to remind - i.e. how can it be done so that other editors aren't given reminders, either accidentally or maliciously? - Evad37 [talk] 22:40, 10 February 2016 (UTC)
- I don't know if a bot can do it. {{ping}} manages to do this right. When you get a ping, the notification tells you who it is from, so we can see that it keeps track somehow (signature?). I realize that ping is deeper into MW than a bot, but personally, I wouldn't use a reminder system that requires me to maintain a separate page. {{ping}} is useful exactly because you can do it in context and inline. Before ping, you could just manually leave a note at someone's page but the benefits of ping are clear to everyone. I draw the same parallels between a manual reminder system and the proposed {{remind}}. Regards, Orange Suede Sofa (talk) 22:49, 10 February 2016 (UTC)
- Yes, being able to leave reminders on any page will make it more useful – but how can it be done in a way that isn't open for abuse? - Evad37 [talk] 23:23, 10 February 2016 (UTC)
- Maybe this is a better way to think about it: A reminder could be little more than a ping to oneself after a delayed period of time. Ping doesn't suffer from forgery issues (you can't fake a ping from someone else) and reminders could be restricted to ping only oneself (so that you can't spam a bunch of people with reminders). But as I allude to above, ping is part of mediawiki so I imagine that it has special ways of accomplishing this that a bot can't. I think that this discussion is becoming unfortunately fragmented because this is a bot-focused board. I think I was asked to join the discussion here because I previously proposed this on WP:VP/T and was eventually pointed to meta. Regards, Orange Suede Sofa (talk) 03:09, 11 February 2016 (UTC)
- Agree; this is a potentially useful idea (although outside reminder software can always suffice), and might make sense as a MediaWiki extension, but if we did it by bot it would end up being a strange hack that would probably have other issues. — Earwig talk 03:12, 11 February 2016 (UTC)
- It would be great if we have this. User:Anomie, any comment/question? --Tito Dutta (talk) 23:48, 17 February 2016 (UTC)
- How would a bot go about finding new reminder requests in the most efficient way? The Transhumanist 01:11, 18 February 2016 (UTC)
- The Transhumanist, what if we pinged the bot instead? So, for instance, I could say {{u|ReminderBot}} at the end of something, and the bot would be pinged and store the ping in a database. Later on, the bot could leave a message on my talkpage mentioning the original page I left the ping in. Enterprisey (talk!) (formerly APerson) 04:10, 20 June 2016 (UTC)
- Agree this would be badass. I sometimes forget in-progress article or template work for years, after getting distracted by something else. — SMcCandlish ☺ ☏ ¢ ≽ʌⱷ҅ᴥⱷʌ≼ 19:02, 19 February 2016 (UTC)
- I love this idea. I think the obvious implementation of this would be to use a specialized template where the editor who places the template receives a talk page message reminding them after the specified number of days/weeks, etc. The template could have a parameter such as "processed" that's set to "yes" after the bot has processed the request. A tracking category of all transclusions without the parameter set to the appropriate value would be an efficient method of searching. ~ RobTalk 02:01, 31 March 2016 (UTC)
- @BU Rob13: Am going to code this with Python later on. PhilrocMy contribs 15:02, 19 April 2016 (UTC)
- @BU Rob13: By the way, would you want the user to input the thing they wanted to be reminded about too? PhilrocMy contribs 15:02, 19 April 2016 (UTC)
- Philroc, I'm not Rob, but yeah, I think that would be a great feature to have. APerson (talk!) 02:31, 3 May 2016 (UTC)
- For the record, I've started working on this - at the moment, I'm waiting for this Pywikibot patch to go through, which'll let me access notifications. Enterprisey (talk!) (formerly APerson) 19:59, 23 June 2016 (UTC)
- Patch went through, so I can start working on this now. Enterprisey (talk!) (formerly APerson) 03:35, 29 June 2016 (UTC)
- Gotta keep this thread alive! Unbelievably, Pywikibot had another bug, so I'm waiting for this other one to go through. Enterprisey (talk!) (formerly APerson) 00:56, 2 July 2016 (UTC)
- Status update: Coding... (code available at https://github.com/APerson241/RemindMeBot) Enterprisey (talk!) (formerly APerson) 04:53, 5 July 2016 (UTC)
- Status update 2: BRFA filed. Requesting comments from Tito Dutta, Evad37, SMcCandlish, and Philroc. Enterprisey (talk!) (formerly APerson) 04:30, 7 July 2016 (UTC)
- @Enterprisey: What happened? I was waiting for it to go live and you......never tried it! Can we have another BRFA filed and have it go live soon? Please! {{Alarm Clock}} is one useless bit of..... Don't you agree, Xaosflux? VarunFEB2003 I am Online 14:21, 21 August 2016 (UTC)
- The BRFA expired as there was no action, however it may be reactivated in the future if the operator wishes. — xaosflux Talk 14:27, 21 August 2016 (UTC)
- I was having a few issues with the Echo API. I'll continue privately testing (testwiki, of course) and if it starts looking promising, I'll reopen the BRFA. Enterprisey (talk!) (formerly APerson) 18:22, 21 August 2016 (UTC)
- Great! VarunFEB2003 I am Offline 14:37, 22 August 2016 (UTC)
- @Enterprisey: How is this going? The Transhumanist 02:10, 12 September 2016 (UTC)
- To be honest, I haven't looked at this since my last response to Varun. I've been a bit busy IRL, and I don't exactly have extravagant amounts of time to devote to my projects here. I'm still thinking about this, though. On Phab, someone's proposed this problem as a potential Outreachy/mentorship thing, which I fully support - if something comes of that, we won't have to worry about this bot-based solution any more. Until then, however, I'll keep working. Enterprisey (talk!) 02:12, 12 September 2016 (UTC)
Coordinates format RfC: Infobox park
Per this RfC (see Help:Coordinates in infoboxes), could all articles using {{Infobox park}} which are also in Category:Pages using deprecated coordinates format be run through with AWB (minor fixes turned on) with this regex (entire text, case-sensitive, other options default)? This should affect roughly 2,000 pages (with no visual changes, aside from the minor fixes).
Find:
*\| ?lat_d *= ?([\-0-9\. ]+)(\n? *\| ?lat_m *= ?([0-9\. ]*))?(\n? *\| ?lat_s *= ?([0-9\. ]*))?(\n? *\| ?lat_NS *= ?([NnSs]?) ?)?\n? *\| ?long_d *= ?([\-0-9\. ]+)(\n? *\| ?long_m *= ?([0-9\. ]*))?(\n? *\| ?long_s *= ?([0-9\. ]*))?(\n? *\| ?long_EW *= ?([EeWw]?) ?)?(\n? *\| ?region *= ?(.*) ?)?(\n? *\| ?dim *= ?(.*) ?)?(\n? *\| ?scale *= ?(.*) ?)?(\n? *\| ?source *= ?(.*) ?)?(\n? *\| ?format *= ?(.*) ?)?(\n? *\| ?display *= ?(.*) ?)?(\n? *\| ?coords_type *= ?(.*) ?)?(\n? *\| ?coords *= ?.*)?
(There's a space at the beginning.)
Replace:
| coords = {{subst:Infobox coord/sandbox | lat_d = $1 | lat_m = $3 | lat_s = $5 | lat_NS = $7 | long_d = $8 | long_m = $10 | long_s = $12 | long_EW = $14 | region = $16 | dim = $18 | scale = $20 | source = $22 | format = $24 | display = $26 | type = $28 }}
Thanks, Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 15:22, 31 August 2016 (UTC)
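For illustration, the same parameter mapping expressed in Python; this is a simplified stand-in for the AWB regex above, handles only well-formed one-parameter-per-line infoboxes, and is not the code any bot here actually runs.

```python
import re

# Infobox park parameter -> Infobox coord/sandbox parameter
PARAM_MAP = [
    ("lat_d", "lat_d"), ("lat_m", "lat_m"), ("lat_s", "lat_s"), ("lat_NS", "lat_NS"),
    ("long_d", "long_d"), ("long_m", "long_m"), ("long_s", "long_s"), ("long_EW", "long_EW"),
    ("region", "region"), ("dim", "dim"), ("scale", "scale"), ("source", "source"),
    ("format", "format"), ("display", "display"), ("coords_type", "type"),
]

def convert(infobox_text):
    """Replace deprecated coordinate parameters with a single |coords= line."""
    values = {}

    def grab(match):
        values[match.group(1)] = match.group(2).strip()
        return ""  # drop the deprecated parameter line

    stripped = re.sub(
        r"^\s*\|\s*(lat_d|lat_m|lat_s|lat_NS|long_d|long_m|long_s|long_EW"
        r"|region|dim|scale|source|format|display|coords_type)\s*=\s*(.*)$\n?",
        grab, infobox_text, flags=re.MULTILINE)
    if "lat_d" not in values:
        return infobox_text  # nothing to convert
    inner = " ".join("| {} = {}".format(new, values.get(old, ""))
                     for old, new in PARAM_MAP)
    return stripped + "| coords = {{subst:Infobox coord/sandbox " + inner + " }}\n"
```

As with the regex, parameters absent from the infobox are passed through empty, and |coords_type= maps to the wrapper's |type=.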
- (Pinging Mandruss and Jonesey95. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 15:23, 31 August 2016 (UTC))
- @Jc86035: Are you sure about |type=$28? That parameter is not deprecated in Infobox park. ―Mandruss ☎ 16:25, 31 August 2016 (UTC)
- Are there sample edits that show a few articles in which this regex replacement has already been done? – Jonesey95 (talk) 18:21, 31 August 2016 (UTC)
- @Mandruss and Jonesey95: The |type= is the parameter in {{Infobox coord/sandbox}} (substituted to create {{Coord}}). The parameter |coords_type= of Infobox park is put into it. I've done the replacement on 11 infoboxes (example, example), but without the rounding for latitude and longitude (which I have yet to test properly). Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 01:31, 1 September 2016 (UTC)
- @Jc86035: More things I don't understand. What is the rounding you refer to? Are you altering coordinates precision? And in your first example you are switching from signed decimal to unsigned, is that your intent? ―Mandruss ☎ 01:51, 1 September 2016 (UTC)
- @Mandruss: The precision in many coordinates – 7 digits – is rather high for parks, which are generally wider than 10 centimetres. Because the input has always been put through {{Infobox coord}} (I'm just substituting a variation with comments and empty parameters removed), there aren't any visual changes. I used Infobox coord as a wrapper because I didn't want to break anything in current uses. —Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:04, 1 September 2016 (UTC)
- Rounding improved to keep zeroes on at the end. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:14, 1 September 2016 (UTC)
- Very not happy with bot decreasing precision. Mill Ends Park --Tagishsimon (talk) 02:17, 1 September 2016 (UTC)
- @Tagishsimon: Well we could always take |area= into account, but the vast majority of parks don't need that level of precision. I'll build it in at some point. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:21, 1 September 2016 (UTC)
- Also, that one doesn't need conversion since it already uses |coords=. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 02:23, 1 September 2016 (UTC)
- @Jc86035: WP:COORDPREC suggests 5 d.p. for objects between about 0–37° latitude and about 8–75 m. If we're going to blindly reduce precision, I don't think we should go fewer than 5 d.p. for parks. I assume we're never going to increase precision. If you at some point take area into account, the only reasonable object size for this purpose would be the sqrt of area. Object size is always one-dimensional, which is why COORDPREC states it as m and km, not m2 and km2. ―Mandruss ☎ 02:57, 1 September 2016 (UTC)
@Mandruss, Jonesey95, and Tagishsimon: Rounding removed; probably wouldn't work in retrospect. I've already tested this configuration, so it should work. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 10:28, 4 September 2016 (UTC)
- Very good; thanks. We should probably do an exercise on precision sometime, but it's sensible to keep things as simple as possible as we progress the use of coord in infoboxes. --Tagishsimon (talk) 11:40, 4 September 2016 (UTC)
@Jc86035: I can write something up for this. Using an AWB custom module will be more flexible than using the regex above. Since the template conversions will be similar, I would like to file one BRFA for all of the templates that need to be converted. For each infobox, I'll just need a map of the parameters into {{subst:Infobox coord/sandbox}} if they differ from the above. (I don't need them all now, just when the template is ready.) — JJMC89 (T·C) 10:21, 5 September 2016 (UTC)
- @JJMC89: I'd prefer doing it in batches since this is likely to take at the very least four months (if no one decides to help), but I don't really mind doing it in one go. We also probably need to make another wrapper based on {{Geobox coor}}, since many infoboxes use that. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 10:28, 5 September 2016 (UTC)
- @Jc86035: That will be fine. One BRFA doesn't mean that they all need to be done at once. It just means that I will have approval to run the bot for all of them. Each infobox can then be processed once the conversion is done. — JJMC89 (T·C) 10:33, 5 September 2016 (UTC)
- @Jc86035, Jonesey95, and Mandruss: BRFA filed — JJMC89 (T·C) 22:45, 5 September 2016 (UTC)
Fix redirects (specific task)
Could a bot replace all instances of these wrong redirects (where they are used in articles) with their targets? --XXN, 10:20, 15 October 2016 (UTC)
- Isn't there already a bot that does this? Dat GuyTalkContribs 10:22, 15 October 2016 (UTC)
- Double redirects are fixed on a regular basis, but these are "normal" redirects, so I'm not sure if anyone fixes them. At least, a while ago a user reported that some of these redirects are in use (have incoming links). --XXN, 10:34, 15 October 2016 (UTC)
- ... is doing manually... --XXN, 21:01, 18 October 2016 (UTC)
- Done. XXN, 21:04, 19 October 2016 (UTC)
- ... is doing manually... --XXN, 21:01, 18 October 2016 (UTC)
- Double redirects are fixed on regular basis, but these are "normal" redirects, so not sure if anyone fixes them. At least, few time ago an user has reported that some of these redirects are used (have incoming links). --XXN, 10:34, 15 October 2016 (UTC)
New task
Could a bot replace all instances of these wrong redirects (where they are used in articles) with their targets? Then I'll go to RFD with them. --XXN, 21:04, 19 October 2016 (UTC)
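The replacement itself is a pure text transformation along these lines; the function is an illustrative sketch, assuming the redirect-to-target mapping comes from the database query mentioned below.

```python
import re

def fix_links(wikitext, redirect_map):
    """Replace [[Redirect]] and [[Redirect|label]] links with their targets,
    keeping the displayed text unchanged."""
    def repl(m):
        target, label = m.group(1).strip(), m.group(2)
        fixed = redirect_map.get(target, target)
        if label is not None:
            return "[[{}|{}]]".format(fixed, label)
        if fixed != target:
            return "[[{}|{}]]".format(fixed, target)  # keep original display text
        return m.group(0)
    return re.sub(r"\[\[([^\[\]|]+)(?:\|([^\[\]]*))?\]\]", repl, wikitext)

print(fix_links("See [[Old name]] and [[Old name|it]].", {"Old name": "New name"}))
# See [[New name|Old name]] and [[New name|it]].
```

Piped links keep their label, and unpiped links are piped so the article text reads exactly as before.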
- @XXN: Is this still applicable? Do you believe there are over 500 pages? --Dat GuyTalkContribs 19:16, 26 October 2016 (UTC)
- Yep. Ran a query on the DB: there are 416 unique bad links in 287 unique pages, though there may be more than one unique bad link on a page and more than one instance of the same link per page. --XXN, 21:36, 26 October 2016 (UTC)
- Is a bot needed here, or should it be done manually? Pinging Xaosflux since he helped me with my BRFAs. Dat GuyTalkContribs 21:40, 26 October 2016 (UTC)
- @DatGuy: isn't there already a bot that processes RfD deletes? Why would these need to be delinked first - THEN brought to RfD? Just take them to RfD now. — xaosflux Talk 22:58, 26 October 2016 (UTC)
- Hold the RfD first. The thing is, if you delink first, you are pre-empting the outcome of a (potential) RfD, which is an abuse of process. --Redrose64 (talk) 23:24, 26 October 2016 (UTC)
- @Xaosflux and Redrose64: After redirects are tagged for deletion, their redirect function breaks and they become simple short pages. In a previous RFD discussion, some users complained about "who is going to fix all the redlinks that will be created" by deletion of the listed pages. I don't think there is a bot that fixes such redirects in articles after they are deleted at RFD. At this moment it's easier to write a bot to replace such redirects than it will be after they are tagged or deleted. So this is why I came here with this request. --XXN, 09:36, 27 October 2016 (UTC)
- Hold the RfD first. The thing is, if you delink first, you are pre-empting the outcome of a (potential) RfD, which is an abuse of process. --Redrose64 (talk) 23:24, 26 October 2016 (UTC)
- @DatGuy: isn't there already a bot that processes RfD deletes? Why would these need to be delinked first - THEN brought to RfD? Just take them to RfD now. — xaosflux Talk 22:58, 26 October 2016 (UTC)
- Is a bot needed here, or should it be done manually? Pinging Xaosflux since he helped me with my BRFAs. Dat GuyTalkContribs 21:40, 26 October 2016 (UTC)
- Yep. Ran a query on DB: there are 416 unique bad links in 287 unique pages, though there may be more than one unique bad link on page and more than one instance of the same link per page. --XXN, 21:36, 26 October 2016 (UTC)
- Some redirects were fixed in tens of pages at once by fixing them in templates. --XXN, 17:04, 30 October 2016 (UTC)
- All done now. --XXN, 21:10, 5 November 2016 (UTC)