
Wikipedia:Bot requests/Archive 78

From Wikipedia, the free encyclopedia

Creating redirects from values in a list of episodes article

Hello and thanks in advance for your time. Is it possible for a bot to take a list of list-articles, with articles such as List of House episodes, go to the episode section of an article from that list, and then get all values in the "title" column? Basically the name of each episode. If that is a yes, can the bot then check if an article at that name is present or not? Finally creating a redirect based on that article name. So for example:

  1. Bot gets a list of articles;
  2. It goes to the first article in the list - List of Arrow episodes;
  3. Goes to the episode section - List of Arrow episodes#Episodes;
  4. Goes over the episode list. At episode #2 gets the title "Honor Thy Father";
  5. Checks if Honor Thy Father is an article;
  6. If article (or redirect) present then create a redirect at "title (TV show)" (as: "Honor Thy Father (Arrow)"), if not then output to list as "title" (as: "Honor Thy Father").
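Steps 4-6 above could be sketched in Python against the wikitext, scanning {{Episode list}} transclusions; this is a rough illustration, and the parameter name |Title=, the function name, and the "(Show)" disambiguation style are assumptions taken from the example. The existence check in step 5 would need an API call and is omitted here.

```python
import re

def episode_redirects(wikitext, show):
    """Extract |Title= values from {{Episode list}} transclusions and
    propose redirect titles per the steps above. Returns (title,
    proposed_redirect) pairs; checking whether each title already exists
    as an article (step 5) would be a separate API lookup."""
    titles = re.findall(r"\|\s*Title\s*=\s*([^|\n}]+)", wikitext)
    return [(t.strip(), "%s (%s)" % (t.strip(), show)) for t in titles]
```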


My goal is to be able to create episode redirects quickly and easily, so I'm trying to figure out how best to do it, as doing this manually takes me a very long time (there are a few more steps, but I would like to know if the general idea is even possible). --Gonnym (talk) 08:01, 24 October 2018 (UTC)

More likely would be for the bot to look for the {{Episode list}} templates in the wikitext, rather than trying to scrape the HTML. But first you'd need a consensus at WP:VPR or the like establishing that the community actually wants all these redirects. Anomie 11:06, 24 October 2018 (UTC)
Are you sure I need to get a consensus for something that seems to already have consensus? Category:Redirected episode articles lists over 13k redirects and redirected episodes have their own redirect template. --Gonnym (talk) 11:24, 24 October 2018 (UTC)
Mass-creating of stuff by bots tends to be more controversial than humans doing it. Anomie 11:42, 24 October 2018 (UTC)
@Gonnym: It is part of the bot policy Wikipedia:Bot_policy#Mass_article_creation, unless it's just a few pages. {{Episode list}} is in 11923 articles. This could be many, many thousands of new redirects. Ronhjones  (Talk) 17:49, 24 October 2018 (UTC)
Redirect creation is typically much less controversial than full article creation, but that is a LOT of redirects and it would be nice to not have to update them every 2 weeks because someone thought 'wouldn't it be nice if...' or 'could we do this instead...'. Feedback from WT:AST would be useful, since they have created a crap ton of systematic redirects, and devised templates like {{NASTRO comment}}. Headbomb {t · c · p · b} 01:29, 8 December 2018 (UTC)

Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU

Centralize the ~1400+ instances of references ({{cite journal ...}}) to the "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU" by replacing them with a single template (named e.g. Template:R:LunarNomenclature). The contents of the latter should be:
{{cite journal |last1=Menzel |first1=Donald H. |authorlink1=Donald Howard Menzel |last2=Minnaert |first2=Marcel |authorlink2=Marcel Minnaert |last3=Levin |first3=Boris J. |last4=Dollfus |first4=Audouin |authorlink4=Audouin Dollfus |last5=Bell |first5=Barbara |title=Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU |doi=10.1007/BF00171763 |journal=Space Science Reviews |volume=12 |issue=2 |pages=136–186 |date=1971 |bibcode=1971SSRv...12..136M |ref=harv }}
yielding:
Menzel, Donald H.; Minnaert, Marcel; Levin, Boris J.; Dollfus, Audouin; Bell, Barbara (1971). "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU". Space Science Reviews. 12 (2): 136–186. Bibcode:1971SSRv...12..136M. doi:10.1007/BF00171763. {{cite journal}}: Invalid |ref=harv (help)
Urhixidur (talk) 14:43, 27 October 2018 (UTC)
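If such a shortcut template were created, the replacement itself is mechanical. A sketch of the substitution; matching citations by the report's DOI is an assumption (the request does not specify how instances would be identified), chosen because it survives parameter reordering.

```python
import re

# Replace any {{cite journal ...}} carrying the report's DOI with the
# proposed shortcut template. The DOI is used as the match key (an
# assumption); [^{}]* avoids crossing into neighbouring templates.
DOI = re.escape("10.1007/BF00171763")
PATTERN = re.compile(r"\{\{\s*cite journal\b[^{}]*" + DOI + r"[^{}]*\}\}", re.I)

def centralize(wikitext):
    return PATTERN.sub("{{R:LunarNomenclature}}", wikitext)
```

A production run would also need to handle citations containing nested templates, which this character-class approach deliberately skips.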

What would be the purpose? We, the community and external parties, have tools and bots designed to work with CS1|2 ({{cite journal}}). So many things run on this system. Shortcut templates can create more problems than they solve. -- GreenC 06:00, 9 December 2018 (UTC)

Bypassing redirects in election navboxes

Recently, consensus was reached to move all election and referendum articles to have the year at the front. A bot, TheSandBot, was created to move the articles (approximately 35000) to the new titles. However, the bot did not change navboxes to use the new format. Per WP:BRINT, redirects from navigational templates should be bypassed to allow readers to see which page they are on in the template. This is a lot of simple work which would have to be done by humans if a bot were not created. Danski454 (talk) 15:15, 9 December 2018 (UTC)

@Danski454: Number 57 has volunteered to do what is necessary, but it is also worth noting that there is another bot already doing this task (the name just escapes me at the moment). --TheSandDoctor Talk 07:48, 11 December 2018 (UTC)
@Danski454: As you can see from my contributions, I'm about midway through this task. I'm not sure it could be done by a bot as there are a few oddities that need checking individually. Cheers, Number 57 08:34, 11 December 2018 (UTC)

Creating redirect to main userpage from subpages.

Y Done - If anyone wants User:RF1 Bot to run for you as well, let me know.

I'd like a bot that, once a month, creates a page at this month's talk-archive subpage, like User:RhinosF1/Archives_2018/10_(October), redirecting to my main user page.

How would it be coded?

Happy to run it semi-automatic and monitored. Would not run outside my mainspace.

RhinosF1 (talk) 14:43, 16 December 2018 (UTC)

Changed URL to wikilink to avoid mobile link. Primefac (talk) 14:44, 16 December 2018 (UTC)
I personally see zero reason for a bot to do this - you don't need a User: page that corresponds to a User_talk: page, especially when it's a user subpage that is just a redirect to the userpage itself. Primefac (talk) 14:46, 16 December 2018 (UTC)
@Primefac: Again, I like having my userspace like this. I'm pretty decent at Python but am not sure how to use APIs to do this, or where to start. If somebody could give me some example foundation code that would be excellent. RhinosF1 (talk) 15:03, 16 December 2018 (UTC)
I'm not saying you can't set up your userspace like this, I'm saying that there's not much of a reason to do so. If there's a consensus that says a user can create a bot that will edit once a month so that a pointless redirect can be created, then it might pass WP:BOTREQUIRE. This, I suppose, is the purpose of this thread, but if there is consensus against this task (based on this thread) then chances are you won't be able to get your bot. Primefac (talk) 16:05, 16 December 2018 (UTC)
Not sure why this is being done either, but couldn't RhinosF1 simply create the redirects manually ahead of time for the next year? No bot required. -- GreenC 16:14, 16 December 2018 (UTC)
If you are decent at python you should look into mw:API:Client code#Python. That said, your bot would need to be approved at WP:BRFA. No comment here about the socio-political feasibility. --Izno (talk) 16:15, 16 December 2018 (UTC)
Note WP:BOTUSERSPACE: "any bot or automated editing process that affects only the operator's or their own userspace (user pages, user talk pages, user's module sandbox pages and subpages thereof), and which are not otherwise disruptive, may be run without prior approval." Chances are that one redirect per month isn't going to be disrupting things. Anomie 02:38, 17 December 2018 (UTC)
I thought I had commented earlier, but I agree with Anomie that this is most likely covered by WP:BOTUSERSPACE and therefore wouldn't be a problem. --TheSandDoctor Talk 17:34, 17 December 2018 (UTC)
Thanks for your support. I've never used APIs before; does anyone have any example code for creating a page with a redirect? RhinosF1 (talk) 17:48, 17 December 2018 (UTC)
@RhinosF1: What programming language(s) are you familiar with? I do most of my work here in Python. If you give me specific details of what it is to do etc, I will happily do it for you (if you want) and then you can learn off of the code. For the next couple of weeks, I have a decent amount of free time on my hands. --TheSandDoctor Talk 18:14, 17 December 2018 (UTC)
I use python 2.7 at home RhinosF1 (talk) 18:18, 17 December 2018 (UTC)
@RhinosF1: Then I would recommend checking out mwclient as that should cover your needs, but it is only in 3.0+ if I recall correctly. As I said above, if you want I can make it for you and then link you the code for future reference. You could also check out my repositories as they are all relevant (particularly this one). --TheSandDoctor Talk 18:37, 17 December 2018 (UTC)
If you're happy to, making it would be great as I've never done anything like it before. RhinosF1 (talk) 19:00, 17 December 2018 (UTC)
@RhinosF1: Just want to make sure that this is clear before going ahead with anything: So every month you want it to create User:RhinosF1/Archives_YEAR/MO_NUM_(MO_NAME), which redirects to your user page? --TheSandDoctor Talk 19:23, 17 December 2018 (UTC)
Nearly, just like https://en.m.wikipedia.org/wiki/User:RhinosF1/Archives2018/12_(December)
That User:RhinosF1/ArchivesYYYY/MM_(MONTH) redirecting to User:RhinosF1
Thanks,
RhinosF1 (talk) 19:31, 17 December 2018 (UTC)
I've just created an account for it, User:RF1_Bot. Use its sandbox if you want, and feel free to create any other subpages you need for source code etc. RhinosF1 (talk) 19:50, 17 December 2018 (UTC)
@RhinosF1: Here, though you are going to need to install mwclient via pip. If you would rather that I run it, you may email me the bot account's login info. That said, if you do choose that option, though I would never attempt anything, I would strongly recommend making sure that its password is unique for best security practice purposes. This is critical with an account acting as a bot, regardless of not being flagged. I have commented where the code itself needs to be changed in order to function and produce the desired range. It would be simpler just to make a year or two's worth at once, and that is how the code has been set up. --TheSandDoctor Talk 01:12, 18 December 2018 (UTC)
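For reference, the page-naming and saving logic described above can be sketched with mwclient. The function names are illustrative, the title format follows the corrected example above (Archives YYYY/MM_(Month) with no underscore after "Archives"), and site is assumed to be an already-logged-in mwclient.Site('en.wikipedia.org').

```python
import calendar

def archive_title(year, month, user="RhinosF1"):
    # Builds User:RhinosF1/ArchivesYYYY/MM_(Month), per the format above.
    return "User:%s/Archives%d/%02d_(%s)" % (
        user, year, month, calendar.month_name[month])

REDIRECT = "#REDIRECT [[User:RhinosF1]]"

def create_redirects(site, year):
    # site: a logged-in mwclient.Site('en.wikipedia.org').
    # Creates a whole year's worth at once, as suggested above.
    for month in range(1, 13):
        page = site.pages[archive_title(year, month)]
        if not page.exists:
            page.save(REDIRECT, summary="Creating monthly archive redirect")
```

A non-flagged account will hit rate limits, so a short delay between saves (as RhinosF1 found below) would be sensible.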
I'm happy running it myself, thanks for the help. Am I definitely safe to run without BRFA approval?
RhinosF1 (talk) 06:36, 18 December 2018 (UTC)
@RhinosF1: From WP:BOTUSERSPACE: "In addition, any bot or automated editing process that affects only the operator's or their own userspace (user pages, user talk pages, user's module sandbox pages and subpages thereof), and which are not otherwise disruptive, may be run without prior approval.". You are good so long as you don't go crazy running it tons. You will also need to update the call_home method to reflect your bot's username and whether or not you wish for such a method. If you have any questions about the code or operating of a bot, please feel free to let me know (if so, ping please). --TheSandDoctor Talk 08:45, 18 December 2018 (UTC)

It's showing an error: AssertUserFailedError: By default, mwclient protects you from accidentally editing without being logged in. If you actually want to edit without logging in, you can set force_login on the Site object to False. RhinosF1 (talk) 15:58, 18 December 2018 (UTC)

@RhinosF1: Because you need to change the login details in credentials.txt to those of your bot instead of "BOT" and "PASS". --TheSandDoctor Talk 16:37, 18 December 2018 (UTC)
@TheSandDoctor: I have RhinosF1 (talk) 16:42, 18 December 2018 (UTC)
Hello. Please check your email; you've got mail!
It may take a few minutes from the time the email is sent for it to show up in your inbox. You can remove this notice at any time by removing the {{You've got mail}} or {{ygm}} template.
@RhinosF1: Oh, yeah. Sorry. Where it has try: pass in main(), remove "pass" and uncomment the site.login bit. It should then work. When I was testing, I didn't want to actually edit, so I removed that and forgot to put it back before pushing. --TheSandDoctor Talk 16:58, 18 December 2018 (UTC)
Check the repo for what I mean, I have pushed the correct version. Just take that snippet and replace the line in yours. --TheSandDoctor Talk 17:00, 18 December 2018 (UTC)
That has just worked in test. I had to add a delay to stop rate limiting, but apart from that it's fine. RhinosF1 (talk) 17:08, 18 December 2018 (UTC)
@RhinosF1: Awesome! I'm glad I could help. As for the rate limiting, that is something that happens for non-bot flagged accounts. I am glad that you were able to add a delay easily. --TheSandDoctor Talk 21:25, 18 December 2018 (UTC)
@TheSandDoctor: For a 'bot' flag, do I need to go through BRFA? RhinosF1 (talk) 21:27, 18 December 2018 (UTC)
@RhinosF1: Yes, and you would also need a valid reason for the flag (i.e. moving ~40 thousand pages, or creating ~40 thousand redirects). --TheSandDoctor Talk 21:53, 18 December 2018 (UTC)

Automatic US congressional Election Result Updating bot ?

It's surprising, given how long Wikipedia has been around and how easily automatable the task is, that no bot exists to automatically update US congressional district pages, which are almost uniformly a mess. There exists no template for how to present results: some pages go in reverse chronological order while others don't, there is zero consistency in presentation, and many pages haven't been updated since 2014.

Going through and manually editing all 435 pages would be extremely tedious so the most logical solution is to create a bot dedicated to the task, which can not only update the pages but fix them.

The quality of the results section of congressional pages is abysmal and easy to fix: simply create a standard for how congressional district pages display results, then create a bot to automatically generate election templates and add them to the pages following that standard. I'm a bit of a newb, so I don't know exactly how we would go about agreeing on a standard page, but I'm sure there is a process.

I would be open to coding the bot myself if somebody more experienced with them is willing to offer help/assistance.

Some examples of poor-quality pages:

-- — Preceding unsigned comment added by Zubin12 (talkcontribs)

This might be better done as a Lua template with data files anyone can edit and template options anyone can modify. But the data still has to be entered, so there is no saving of labor or guarantee of staying up to date, unless someone made a bot to pull data from external sources into the Lua tables. The benefit would be consistent display. The downside would be a system working outside normal wiki source, which creates other complications. All this is possible but not simple. -- GreenC 16:26, 18 December 2018 (UTC)
Given that there are APIs that allow one to automatically access election information, it would seem pretty simple to create a template and have the script iterate over every district. Are there any good places to learn more about Lua templates on Wikipedia? Zubin12 (talk) 01:16, 19 December 2018 (UTC)
Wikipedia:Lua is the start. You cannot access external APIs from Lua. --Izno (talk) 02:21, 19 December 2018 (UTC)
If Lua doesn't allow external APIs, then what exactly is wrong with using Pywikibot? Zubin12 (talk) 03:15, 19 December 2018 (UTC)
(edit conflict) Not possible to pull external API data via Lua (except from Wikidata). It would require a bot to get the external API data then update a Lua data file. Similar to Module:Calendar date which reads from the data file Module:Calendar date/Events. GreenC 02:25, 19 December 2018 (UTC)
This kind of problem might be fixable using Wikidata or commons:Commons:Data tables. --Izno (talk) 02:21, 19 December 2018 (UTC)
You might mean Template:Wikidata list which could work but still need to populate Wikidata somehow. -- GreenC 02:29, 19 December 2018 (UTC)
No, I don't. But yes, the work would be in populating Wikidata (for the former suggestion). --Izno (talk) 02:32, 19 December 2018 (UTC)
Ok then I don't know what you mean, commons:Commons:Data tables link doesn't work. -- GreenC 02:38, 19 December 2018 (UTC)
I'm not sure where exactly Izno was trying to link to, but mw:Help:Tabular Data may be relevant. Anomie 03:29, 19 December 2018 (UTC)
That's the one. --Izno (talk) 03:49, 19 December 2018 (UTC)
I don't understand; are you saying that the data should first be uploaded to Wikidata and then used by a bot? Zubin12 (talk) 03:15, 19 December 2018 (UTC)
A bot would first put the data somewhere convenient (Wikidata/Commons) and then we could include those data using a template here. --Izno (talk) 03:49, 19 December 2018 (UTC)
To summarize a number of ways to store and access the data:
In all three cases the data would require a bot to keep it in sync with the remote API. -- GreenC 03:53, 19 December 2018 (UTC)
Thanks. It would seem like using tabular data makes the most sense, given that it's easy to find election results stored in CSV format. How exactly would one go about uploading the data? And do I need to do anything further to get permission? Zubin12 (talk) 05:32, 19 December 2018 (UTC)
I've never used .tab on Commons before but agree it is probably the best option, because it's universally available to all wiki languages and it's easy to import data compared to Wikidata or Lua tables. I would encourage developing a bot to keep the data up to date automatically; otherwise it will depend on manual updates, which is error-prone. A bot will require a bot flag (bot permission). See Commons:Commons:Bots. -- GreenC 19:03, 19 December 2018 (UTC)
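As a sketch of what the import step might look like: converting a CSV of results into the JSON shape that Data: namespace .tab pages use (see mw:Help:Tabular Data). The column names here are hypothetical, and treating every field as a string is a simplification.

```python
import csv
import io
import json

def csv_to_tab(csv_text, description="US House election results"):
    """Convert CSV text into the .tab page JSON shape (license, description,
    schema, data). Tabular data on Commons must be CC0-licensed."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    return json.dumps({
        "license": "CC0-1.0",
        "description": {"en": description},
        "schema": {"fields": [{"name": h, "type": "string"} for h in header]},
        "data": data,
    }, indent=2)
```

A sync bot would regenerate this JSON from the external source and save it over the existing Data: page.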

spectator.co.uk

There are about 1000 mainspace links to Spectator, most of which are broken. They changed URL schemes without redirects. The pages still exist at a new URL. Example:

There's no obvious way to program this, but posting if anyone has ideas. -- GreenC 06:27, 7 November 2018 (UTC)

I actually do not have much knowledge about Wikipedia bots. When I checked two or three links, the things that need to be done, from a reader's point of view, are:

1) Identify the links which are broken.
2) Remove "-.thtml" from the last portion of the link.
3) Add the month and year numbers before the last section of the URL, separated by commas. This is the year and month in which the article appeared. If the month is only one digit, add a zero before the month number. Adithyak1997 (talk) 10:40, 7 November 2018 (UTC)
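A sketch of the rewrite those steps describe. Since the example URLs were not preserved above, the URL shape here is hypothetical, slash-separated date segments are assumed, and the date cannot be derived from the old URL, so it must be supplied from elsewhere.

```python
import re

def fix_spectator_url(url, year, month):
    """Apply the steps above: strip the '.thtml' suffix and insert the
    article's year and zero-padded month before the last path segment.
    The publication date is not recoverable from the URL itself."""
    url = re.sub(r"\.thtml$", "", url)
    head, _, slug = url.rpartition("/")
    return "%s/%d/%02d/%s" % (head, year, month, slug)
```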

The idea is to automate the conversion, since there are 1000+ links. A bot wouldn't know which month. In the second example it is "letters-201" vs "letters", thus "-201" is also an unknown. If there were a way to find the redirected URL, such as through archive.org or some other means. Or volunteers to manually fix them. -- GreenC 20:14, 7 November 2018 (UTC)
One could also just write an e-mail to spectator.co.uk with the old urls and kindly ask them to give a mapping to the new urls. Then a bot could replace those links. -- seth (talk) 11:13, 10 November 2018 (UTC)
@Lustiger seth:. Do you want to give it a try? Narrowed it down to 552 dead links (User:GreenC/data/spectator). I've tried asking these things before and never had success so maybe someone else would have better luck. If they provide a mapping, I'll make the changes. -- GreenC 17:42, 10 November 2018 (UTC)
E-mail with links to special:linksearch/http://www.spectator.co.uk, User:GreenC/data/spectator, and to this discussion sent. If I get an answer, where shall I place the list? -- seth (talk) 10:04, 11 November 2018 (UTC)
Thanks! In data/spectator -- GreenC 16:30, 11 November 2018 (UTC)
Hi!
2018-11-11 10:02: mail sent to spectator digitalhelp@... (probably this was the wrong address, because they only look after subscriptions).
2018-11-11 10:12: first (automatic) answer: "You will receive a reply from one of our customer service team members within 48hrs."
2018-11-13 01:48: second answer: "I am awaiting further information regarding your enquiry and I will contact you as soon as this information has been received."
2018-11-14 01:42: third answer: "We would request you to email editor@... for further information." (deleted e-mail address)
2018-11-14 19:54: second try (mailed to editor@...)
2018-11-14 19:54: fourth answer: "I'm afraid that due to the number of them received at this address it’s not possible to send a personal response to each one. To help your email find its way to the right home and to answer some questions:
  • If you are writing a letter for publication, please send it to letters@....
  • Please send article pitches and submissions to pitches@....
  • If you are having problems with your subscription, please email customerhelp@... [...]. For problems with the website, our digital paywall, our apps or the Kindle edition of the magazine, our FAQ page is here – and if that doesn’t answer your question please email digital@....
  • If the matter is urgent, please call our switchboard on 020 [...]."
2018-11-14 20:06: third try (mailed to digital@...)
In other words: this may take some time. -- seth (talk) 20:10, 14 November 2018 (UTC)
Well, I don't think, I'll get an answer. :-( -- seth (talk) 23:37, 25 December 2018 (UTC)

Unreferenced articles

Could a bot please identify articles that are not currently tagged as unreferenced but seem not to have references? Thanks for looking at this, Boleyn (talk) 19:12, 10 November 2018 (UTC)

Why do I get the feeling that this might be WP:CONTEXTBOT? --Redrose64 🌹 (talk) 23:42, 11 November 2018 (UTC)
Hi, Redrose64, I'm not sure I was clear enough, by identify the articles I meant generate a list of articles, similar to Wikipedia:Mistagged unreferenced articles cleanup. Thanks, Boleyn (talk) 18:18, 12 November 2018 (UTC)
Boleyn, I like this idea. Will take it up. If/when something is ready I'll post at Wikipedia talk:WikiProject Unreferenced articles or if any questions arise. -- GreenC 05:06, 2 December 2018 (UTC)
Bot now in beta. Initial test results. Followup at Wikipedia talk:WikiProject Unreferenced articles. -- GreenC 01:22, 17 December 2018 (UTC)

BRFA filed -- GreenC 04:07, 31 December 2018 (UTC)

College football schedule conversions

I'd like have a bot update the templates used to render college football schedule tables. Three old templates—Template:CFB Schedule Start, Template:CFB Schedule Entry, and Template:CFB Schedule End—which were developed in 2006, are to be replaced with two newer, module-based templates—Template:CFB schedule and Template:CFB schedule entry. The old templates remain on nearly 12,000 articles. The new templates were coded by User:Frietjes, who has also developed a process for converting the old templates to the new:

add {{subst:#invoke:CFB schedule/convert|subst| at the top of the table, before the {{CFB Schedule Start}}, and }} at the bottom, after the {{CFB Schedule End}}.

The development and use of these new templates has been much discussed in the last year at Wikipedia talk:WikiProject College football and has a consensus of support.

Thanks, Jweiss11 (talk) 00:32, 8 November 2018 (UTC)
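The wrapping step Frietjes describes can be sketched as a wikitext transformation. The regexes are illustrative; a production bot would also need to handle template redirects, unusual spacing, and pages with multiple tables.

```python
import re

def wrap_for_conversion(wikitext):
    """Wrap each old-style schedule table in the subst'd converter
    module, per the instructions above: the invoke goes before
    {{CFB Schedule Start}} and the closing }} after {{CFB Schedule End}}."""
    wikitext = re.sub(r"(?=\{\{\s*CFB Schedule Start)",
                      "{{subst:#invoke:CFB schedule/convert|subst|", wikitext)
    wikitext = re.sub(r"(\{\{\s*CFB Schedule End\s*\}\})", r"\1}}", wikitext)
    return wikitext
```

On save, the subst: would make MediaWiki expand the converter module and emit the new-template markup.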

We also need to add the optional "Source" column that was approved as part of the new template. Cbl62 (talk) 03:13, 19 November 2018 (UTC)
@Cbl62: This is irrelevant to the conversion process at stake here. Template:CFB schedule entry services the source column, although the template documentation does not reflect that. Jweiss11 (talk) 04:52, 19 November 2018 (UTC)
While we're doing the conversion, it makes sense to get everything working properly. Others have noted that there is a glitch in using the "Source" column in the named parameters version of the template. Whether the glitch is in the documentation or in core functionality, it should be remedied so that the "Source" column can be added. Cbl62 (talk) 10:38, 19 November 2018 (UTC)
@Cbl62: What is the glitch with the "Source" column in the named parameters version of the template? You can describe it or show an example? Jweiss11 (talk) 14:38, 19 November 2018 (UTC)
The "glitch" is that people have expressed a concern that they have difficulty adding a "Source" column to the new named parameters chart. See discussion here: Wikipedia talk:WikiProject College football#2018 Nebraska score links. I have yet to see a version of the new named parameters chart that includes a source column. Can you show an example where it has been done? And is there a reason it is not included in the template documentation? (By way of contrast, in the unnamed parameters version, the Source column is included in the template documentation as an optional add-on, see, e.g., 1921 New Mexico Lobos football team.) Cbl62 (talk) 15:00, 19 November 2018 (UTC) See also 2018 Michigan Wolverines football team where sources are presented in each line of the template but no "Source" column has been generated. Cbl62 (talk) 15:06, 19 November 2018 (UTC)
This is not a glitch. It is simply user habit. The person to ask about the template documentation is User:Frietjes, as she is the editor who wrote it. The inline citations at 2018 Michigan Wolverines football team could be easily moved to the source column if one so wanted. Jweiss11 (talk) 16:00, 19 November 2018 (UTC)
the source parameter is demonstrated in example 3. feel free to add this to the blank example at the top of the documentation, along with other missing parameters, like overtime, etc. Frietjes (talk) 16:11, 19 November 2018 (UTC)
Excellent. Thanks, Frietjes! Cbl62 (talk) 22:22, 19 November 2018 (UTC)

@BU Rob13: would you be available to take on this bot request? Thanks, Jweiss11 (talk) 03:16, 4 December 2018 (UTC)

@Jweiss11: Sorry, but not really. I'm about to take an extended break from Wikipedia, most likely. ~ Rob13Talk 04:00, 4 December 2018 (UTC)
I'm only skimming this but it might be a good candidate for PrimeBOT's Task 30. Primefac (talk) 15:21, 4 December 2018 (UTC)
@Primefac: Could you actually leave this for now? I've been trying to get a technically-minded friend interested in Wikipedia for a bit, and this may interest her. I'm reaching out to see if she'd be interested in jumping in and creating a bot. ~ Rob13Talk 23:44, 7 December 2018 (UTC)
Sure thing. Primefac (talk) 16:23, 9 December 2018 (UTC)
@BU Rob13: any word from your friend about whether she is interested in taking this on? Thanks and happy holidays, Jweiss11 (talk) 21:15, 25 December 2018 (UTC)
Sadly, a non-starter. She took a look around and ultimately decided she wasn't interested in the culture after seeing a talk page discussion gone bad. Which is fair, to be honest. Primefac, all yours. Thanks for holding off. ~ Rob13Talk 02:20, 26 December 2018 (UTC)
@Primefac: are you still available to take this on? Jweiss11 (talk) 04:32, 8 January 2019 (UTC)
Sorry for the late reply; yes, I should be able to do this. Primefac (talk) 11:09, 23 January 2019 (UTC)

 Working. Primefac (talk) 15:30, 27 January 2019 (UTC)

 Done. There are still some user-space transclusions of {{CFB Schedule Start}} et al, but barring any accidental miscues from GIGO issues it should be finished. Primefac (talk) 04:15, 28 January 2019 (UTC)


Check 5.7 million mainspace talk pages for sections that would benefit from a {{reflist-talk}}.

Example edit.

Scope: for each talk page, extract each level-2 section. For each section, check for the existence of reference tags, i.e. <ref></ref>. If they exist, check for the existence of {{reflist-talk}} or <references/>. If neither exists, add {{reflist-talk}} at the end of the section (optionally in a level-3 subsection called "References").

-- GreenC 16:27, 1 January 2019 (UTC)
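That scope could be sketched as a wikitext pass like the following; the heading and tag regexes are simplified and would need hardening for real talk pages (named refs, refs inside templates, etc.).

```python
import re

TALK_TAG = "{{reflist-talk}}"

def add_reflist_talk(talk_wikitext):
    """Per the scope above: split the page on level-2 headings and append
    {{reflist-talk}} to any section that contains <ref> tags but no
    existing {{reflist-talk}} or <references/>."""
    parts = re.split(r"(?m)(^==[^=].*==\s*$)", talk_wikitext)
    for i, body in enumerate(parts):
        if i % 2 == 1:
            continue  # odd indices are the captured heading lines
        if (re.search(r"<ref[\s>]", body)
                and not re.search(r"\{\{\s*reflist-talk|<references\s*/?\s*>",
                                  body, re.I)):
            parts[i] = body.rstrip() + "\n" + TALK_TAG + "\n"
    return "".join(parts)
```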

A more deterministic method is to search the HTML for <ol class="references"> - this will always exist if there is <ref></ref> somewhere in the page, regardless of the existence of {{reflist-talk}} or <references/>, and it will account for things like <!-- <ref></ref> --> -- GreenC 16:36, 1 January 2019 (UTC)
@GreenC: Not true: it's also present in pages with an autogenerated reflist, such as the previous version. --Redrose64 🌹 (talk) 20:08, 1 January 2019 (UTC)
Yeah I know. It will always exist if there is a ref, regardless of the existence of <references/> or its equiv. -- GreenC 20:16, 1 January 2019 (UTC)

I ran a script. In 2,000 talk pages it found 11 cases:

Extrapolated, about 29,000 pages would be like this. -- GreenC 19:43, 1 January 2019 (UTC)

BRFA filed -- GreenC 20:02, 1 January 2019 (UTC)

Y Done -- GreenC 07:19, 11 February 2019 (UTC)

WikiProject Soil Tagging

The request is to have {{WikiProject Soil}} added to the article talk pages in 39 categories. Project notification posted. Much appreciated:

requested: -- Paleorthid (talk) 23:10, 6 January 2019 (UTC)

@Paleorthid:  Doing... --DannyS712 (talk) 02:31, 8 January 2019 (UTC)
@Paleorthid: See BRFA filed --DannyS712 (talk) 02:35, 8 January 2019 (UTC) (change to template 01:23, 10 January 2019 (UTC))
@Paleorthid: I ~think~ I tagged them all (the bot was approved a little bit ago) --DannyS712 (talk) 06:25, 28 February 2019 (UTC)
Thank you, that worked very well. Paleorthid (talk) 19:00, 1 March 2019 (UTC)


Redirects to Star Sports

In the coming days, I'm going to redirect Star Sports to Fox Sports (Southeast Asian TV network). But some redirects to Star Sports need to be retargeted in advance.

Redirect the following to Fox Sports (Southeast Asian TV network)
Redirect the following to Star Sports (Indian TV network)

(Correction: STAR Sports HD3, Star Sports HD3, STAR Sports HD4 and Star Sports HD4 did exist. JSH-alive/talk/cont/mail 14:12, 21 January 2019 (UTC))

I don't know what to do with STAR Sports Network and Star Sports Network. Is it the name for Indian channels or Southeast Asian channels? JSH-alive/talk/cont/mail 09:32, 20 January 2019 (UTC)

Y Done per User_talk:Xqt#Requesting_mass_redirect_fix  @xqt 13:50, 1 February 2019 (UTC)

Shadows Commons

This is a relatively simple query: https://quarry.wmflabs.org/query/18894

The images listed in that query ideally should be tagged with {{Shadows Commons}} (unless already tagged as CSD F8)

As this is a repeatable task, and felt to be uncontroversial, it would be better to let a bot do it, freeing up contributors for more complex tasks that require human skills rather than simple tagging clicks. Thanks

Given the query size, the bot would not need to run continuously; once a week should prove more than adequate.


ShakespeareFan00 (talk) 10:58, 6 February 2019 (UTC)
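The tagging pass itself is simple to sketch. Here get_text and save are hypothetical stand-ins for the API calls (e.g. via mwclient), and {{db-f8}} as the marker for an existing CSD F8 nomination is an assumption.

```python
def tag_shadows(page_titles, get_text, save):
    """For each file from the query above, prepend {{Shadows Commons}}
    unless it is already tagged or already nominated under CSD F8.
    get_text(title) -> wikitext; save(title, text, summary) writes it."""
    for title in page_titles:
        text = get_text(title)
        low = text.lower()
        if "{{shadows commons" in low or "{{db-f8" in low:
            continue  # already handled; skip per the request above
        save(title, "{{Shadows Commons}}\n" + text,
             "Tagging file that shadows a Commons file")
```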

BRFA filed -- GreenC 15:30, 6 February 2019 (UTC)

Y Done -- GreenC 22:40, 23 February 2019 (UTC)

listing for Speedy Renaming all subcategories of Category:GTK+ to plain GTK

per Wikipedia:Categories_for_discussion/Speedy#Current_requests (Consistency with main article's name per official renaming)

Please list all subcategories of Category:GTK+ containing "GTK+", renamed to plain "GTK"; their number is high and I can't do it manually. Thanks. -- Editor-1 (talk) 08:19, 10 February 2019 (UTC)

@Editor-1: I can do it (with AWB) - but what specifically are you asking for? A list of the categories? --DannyS712 (talk) 08:25, 10 February 2019 (UTC)
Just a list of all categories that have "GTK+", and the same list without the plus sign (plain GTK); see the mentioned link and related discussion. Thanks. Editor-1 (talk) 08:28, 10 February 2019 (UTC)
@Editor-1: I made a list of all of the subcategories (below) --DannyS712 (talk) 08:30, 10 February 2019 (UTC)
Now the list needs to be put into the format below:

* [[:Category:old name with plus]] to [[:Category:same name without plus]] – per official renaming (request by [[User:Editor-1]])

so it can be included at Wikipedia:Categories_for_discussion/Speedy#Current_requests

thanks. -- Editor-1 (talk) 08:46, 10 February 2019 (UTC)
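The list-to-nomination formatting is mechanical; a sketch, assuming the rename is simply dropping the '+' (per the speedy request above):

```python
def cfds_lines(categories, requester="User:Editor-1"):
    """Turn the subcategory list into the CFDS nomination format above."""
    return ["* [[:%s]] to [[:%s]] – per official renaming (request by [[%s]])"
            % (c, c.replace("GTK+", "GTK"), requester)
            for c in categories]
```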

@Editor-1:  Done --DannyS712 (talk) 09:02, 10 February 2019 (UTC)
@DannyS712: thank you very much. -- Editor-1 (talk) 09:17, 10 February 2019 (UTC)

list


Tagging sub-categories of Category:English-language singers

Please tag all sub-cats of Category:English-language singers (except Uganda) with

{{subst:cfr-speedy|English-language singers from ...}}

i.e. the nomination is to change the word "of" to "from".

Ideally, each category's country name should replace "...", but the ellipsis would be sufficient.

I will then list them at WP:CFDS myself. – Fayenatic London 23:03, 13 February 2019 (UTC)
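Building each category's tag, including substituting the country name for the ellipsis, is a one-line rename; a sketch (the "of"-to-"from" rewrite is taken from the request, everything else is illustrative):

```python
def cfr_speedy_tag(category):
    """Build the {{subst:cfr-speedy}} tag for one
    'X-language singers of <country>' category."""
    name = category.split("Category:", 1)[-1]
    target = name.replace(" singers of ", " singers from ", 1)
    return "{{subst:cfr-speedy|%s}}" % target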

@Fayenatic london: I'll do this one again manually, but I'm going to submit a BRFA soon, so feel free to message me for these in the future --DannyS712 (talk) 00:38, 14 February 2019 (UTC)
@Fayenatic london: Can you list it first, so I can link to it in the edit summary? --DannyS712 test (talk) 00:41, 14 February 2019 (UTC)
@DannyS712 test: Thank you, I have listed them in a separate section at Wikipedia:Categories_for_discussion/Speedy#Current_requests. – Fayenatic London 13:53, 14 February 2019 (UTC)
@Fayenatic london:  Done --DannyS712 (talk) 16:11, 14 February 2019 (UTC)

ARKive

The ARKive project has ended and its website has been replaced with a single page noting that act. Links to pages on arkive.org need to be replaced with archive.org equivalents; and citations need |archive-url= and |archive-date= attributes. I've already updated {{ARKive}}. Can someone oblige, please?

Links like the one at the foot of Bitis schneideri could usefully be replaced using {{ARKive}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:23, 16 February 2019 (UTC)

I have set the domain to dead on IABot and submitted a task to update the affected pages. Dat GuyTalkContribs 14:41, 16 February 2019 (UTC)
Hey Dat Guy I did the same thing and our queues were started about 15 seconds apart. I just killed my job. -- GreenC 14:49, 16 February 2019 (UTC)

My new bot request

Hello! I would like to request to operate a bot! My idea is a bot that can revert reference blanking. In my work as a recent changes patroller, I see many people blanking references. I know that ClueBot reverts vandalism, but usually ClueBot does not revert reference blanking. Let me know what you think!    Shalvey    17:10, 19 February 2019 (UTC)

How would it tell the difference between vandalism and legitimate deletion. User:ClueBot NG is a sophisticated bot built by a team of programmers. Maybe ask if they can incorporate the idea. Give them example diffs of the types of edits. -- GreenC 17:35, 19 February 2019 (UTC)
Requester indef blocked as WP:NOTHERE --DannyS712 (talk) 02:43, 2 March 2019 (UTC)

A Bot that would see if references go to a site

Hello, I would like to know if it is possible that you guys could create a bot that would check references in articles, and see if they are actually websites, not just a random URL that doesn't even exist. What I mean is that, when you type in a website, you have a blue outline, which then forwards you to the site. What I'm seeing is URLs that aren't highlighted in blue, but just URLs. The bot could be run by me, but I don't know how to code a bot. Thanks!    Shalvey    18:49, 19 February 2019 (UTC)


Let me know what you think! — Preceding unsigned comment added by Shalvey (talkcontribs) 19:13, 19 February 2019 (UTC)

Requester indef blocked as WP:NOTHERE. IAbot checks if links result in actual websites, and tags them if they are dead links (not to actual websites currently), so this is also unneeded. --DannyS712 (talk) 02:44, 2 March 2019 (UTC)

Archive Bot

A bot that will, in certain situations, switch links to web.archive.org. Sun Sunris (talk) 01:31, 23 February 2019 (UTC)

For any page: History tab -> External tools -> Fix dead links .. this will run Internet Archive Bot (IABot) on that page. -- GreenC 02:11, 23 February 2019 (UTC)

Section sizes

Please can someone add {{Section sizes}} to the talk pages of ~6300 articles that are longer than 150,000 bytes (per Special:LongPages), like in this edit?

The location is not critical, but I would suggest giving preference to putting it immediately after the last Wikiproject template, where possible. Omit pages that already have the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:59, 29 December 2018 (UTC)
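The placement rule described here ("after the last WikiProject banner, skip pages already tagged") can be sketched with a naive regex; nested templates inside a banner would defeat this pattern, so a real bot would want a proper wikitext parser:

```python
import re

def add_section_sizes(talk_wikitext):
    """Insert {{Section sizes}} after the last {{WikiProject ...}} banner,
    skipping pages that already have it. Single-level template matching
    only -- a sketch, not a full wikitext parser."""
    if re.search(r"\{\{\s*Section sizes\b", talk_wikitext, re.IGNORECASE):
        return talk_wikitext
    banners = list(re.finditer(r"\{\{\s*WikiProject [^{}]*\}\}", talk_wikitext))
    if not banners:  # no obvious anchor: fall back to the top of the page
        return "{{Section sizes}}\n" + talk_wikitext
    end = banners[-1].end()
    return talk_wikitext[:end] + "\n{{Section sizes}}" + talk_wikitext[end:]
```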

A reasonable request, but I think it might need some sort of consensus to implement. Is there a WikiProject interested in using this (very-recently-created) template in order to improve Wikipedia? Primefac (talk) 15:13, 30 December 2018 (UTC)
Wikipedia:Village_pump_(technical)#Analysing_long_articles (started by Andy). There was another thread on long articles recently, but it must have been archived as I can't find it; it was a call to arms on how to deal with breaking them up. -- GreenC 15:23, 30 December 2018 (UTC)
Cool. Primefac (talk) 15:42, 30 December 2018 (UTC)
Created a Village Pump (proposal) at Primefac's request for more discussion. -- GreenC 19:26, 3 January 2019 (UTC)

One Shot: Removing Commons eligibility assessment from images reviewed by ShakespeareFan00

This is a request for a ONE-Shot bot run to remove -

{{Copy to Commons|human=ShakespeareFan00}}

From the 8000 or so images currently tagged with it. This is requested because both Commons and Wikipedia policy have changed in significant ways since the vast bulk of the images were tagged. It would be tedious to remove the tag individually. ShakespeareFan00 (talk) 17:53, 28 February 2019 (UTC)

@ShakespeareFan00: sure, filing brfa now --DannyS712 (talk) 18:54, 28 February 2019 (UTC)
@ShakespeareFan00: BRFA filed --DannyS712 (talk) 19:00, 28 February 2019 (UTC)

Change of Category:Missing U-boats

I'd like to request moving of pages from Category:Missing U-boats to Category:Missing U-boats of World War II - it is grouping only WWII boats at the moment and is in such parent category (WWI boats were excluded to Category:Missing U-boats of World War I). Pibwl ←« 14:39, 6 March 2019 (UTC)

@Pibwl: To move a category, please see WP:CFD --DannyS712 (talk) 15:16, 6 March 2019 (UTC)
Not necessarily - the category Missing U-boats might stay, to cover both sub-categories. Pibwl ←« 17:44, 6 March 2019 (UTC)
@Pibwl:  Done --DannyS712 (talk) 21:07, 6 March 2019 (UTC)

Replicate Usurp KadaneBot

Moved from WP:BON

Hi all, Kadane very helpfully created KadaneBot for us over at Wikipedia Peer Review - it sends out automated reminders based on topic areas of interest for unanswered peer reviews. Unfortunately, Kadane's been inactive almost since creation (September 2018), and hasn't responded to my request [1]. Would anyone be so kind as to usurp this bot so we can continue to use it? --Tom (LT) (talk) 07:32, 22 February 2019 (UTC)

ADDIT. As Xaosflux pointed out, there isn't really a 'usurp' process, however I think this is probably the easiest title to describe what I am requesting.--Tom (LT) (talk) 07:32, 22 February 2019 (UTC)
Kadane is a student programmer[2] who signed up August 1 to write bots - wrote this one - then left the project evidently on September 30. Looks like they were running the bot from a private computer, as there is nothing on Toolforge, and they didn't publish the source code. -- GreenC 03:05, 23 February 2019 (UTC)
Sigh. Have changed title to reflect this. Thanks for the update @GreenC. --Tom (LT) (talk) 03:55, 3 March 2019 (UTC)

Am seeking a bot that can periodically notify volunteers at WP:PRV about new or unanswered reviews, would be very useful and (I hope) increase peer review activity levels. --Tom (LT) (talk) 03:55, 3 March 2019 (UTC)

@Tom (LT): question: given the volunteer section Wikipedia:Peer_review/volunteers#Applied_sciences_and_technology, how would the bot know it is for reviews listed at Wikipedia:Peer_review/List_of_unanswered_reviews#Engineering_and_technology as the sections have different titles. Similarly, given the example PRV {{PRV|Kadane|Computer Science|contact=monthly}} how would a bot know which reviews are for "computer science"? Thanks for the clarification. -- GreenC 22:58, 5 March 2019 (UTC)
@Tom (LT) and GreenC: - Source code for the bot is now linked on User:KadaneBot. My email wasn't forwarding sorry for the delayed reply. I can set this up on a cron job and continue to support it, or I can hand it off. Let me know what you would like to do. Kadane (talk) 01:26, 6 March 2019 (UTC)
Many thanks Kadane, and great to hear from you. Not sure what 'cron job' means but I think the best solution would be an automated recurrent bot; if not, I'm happy for you to continue to run it now that your amazing source code is freely available :). --Tom (LT) (talk) 09:55, 6 March 2019 (UTC)
Kadane, if you want help getting setup on Toolforge with a cron let me know. Toolforge is the community server farm where tools run on a grid engine (you may already know this), basically a unix shell account with ssh access hosted in the main Wikipedia data center so traffic is over ethernet to Wikipedia servers. -- GreenC 13:59, 6 March 2019 (UTC)
@GreenC: I would actually very much appreciate your help getting setup on Toolforge. For some reason I thought it was an admin only thing. I am currently running everything on a server I use for school. Kadane (talk) 16:38, 6 March 2019 (UTC)
@Tom (LT): A cron job would make it automatic. I will get this implemented. Again sorry for the delay/inconvenience. Kadane (talk) 16:39, 6 March 2019 (UTC)

Y Done, Kadane migrated to Toolforge and running via cron (ie. run automatically at set times). -- GreenC 16:41, 10 March 2019 (UTC)

Thanks again Kadane! --Tom (LT) (talk) 21:13, 10 March 2019 (UTC)

Moving Reference Metadata Out of Two Infoboxes

The Medical Translation Task Force faces an issue with respect to Content Translation. Basically the tool loses references when the metadata exists within template:infobox medical condition (new) and template:drugbox. The issue is described here and the task is supposedly not easily fixable and thus will not be fixed anytime soon.[3]

As a workaround I am proposing a bot that moves the metadata for references from these two infoboxes to the lead or body of the article in question. This will be done for these ~1200 articles: Category:RTT

An example of what such an edit will look like is this.[4]

Doc James (talk · contribs · email) 19:14, 30 January 2019 (UTC)
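The swap in the example edit amounts to exchanging the full reference definition in the infobox with the short named-ref stub in the body. A sketch of that exchange, assuming simple <ref name=...> markup (real pages would need a proper wikitext parser rather than regex):

```python
import re

def move_ref_out_of_infobox(infobox, body):
    """For each <ref name=X>...</ref> defined in the infobox and reused
    in the body as <ref name=X/>, swap the two: leave the short form in
    the infobox and put the full definition at the first body use."""
    for m in re.finditer(r'<ref name="?([^">/]+)"?\s*>.*?</ref>', infobox, re.DOTALL):
        full, name = m.group(0), m.group(1)
        short = '<ref name="%s"/>' % name
        stub = re.compile(r'<ref name="?%s"?\s*/>' % re.escape(name))
        if stub.search(body):  # only act when the ref is reused outside
            body = stub.sub(lambda _: full, body, count=1)
            infobox = infobox.replace(full, short, 1)
    return infobox, body
```

Per Doc James's comment above, a ref used only inside the infobox is left alone, since it causes no translation problem.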

@Doc James: In the example given a named reference gets moved out of the infobox and into a call of that named reference in the main body. What happens in the case that there is not such an obvious place to transfer the citation? In that case would the reference just hang generally in the article, would it go into a special subsection for odd references, would the bot skip that transfer, or is there some other plan? Blue Rasberry (talk) 20:23, 30 January 2019 (UTC)
User:Bluerasberry the bot would do nothing. The problem with content translation only occurs when a named reference occurs within the infobox and than is used as "<ref name=X/>" outside the infobox. If it is only used within the infobox there is no problem. Doc James (talk · contribs · email) 20:29, 30 January 2019 (UTC)
@Doc James: I see. The translation project is concerned with the leads of articles, so when citations critical to the leads is inaccessible in the infobox, then the translation workflow has difficulties applying the full citation in the translated text.
I guess the controversy here could be whether first uses of a citation should be in the body of text rather than the infobox. I say yes - infoboxes increasingly are becoming a space for semi-automatic engagement and any template is challenging for new editors to manipulate anyway. I prefer having a more consistent practice of keeping citations in the body of the text.
Support This proposal does not hurt anything, makes a change which most users would find arbitrary, but which has a big impact for the workflow of the translation team. The bot would do a one-time run of 1200 articles and then perhaps occasional maintenance, which I am guessing could be 1-2 times yearly in the next few years. Do it. Blue Rasberry (talk) 20:35, 30 January 2019 (UTC)
Thanks User:Bluerasberry. Only need the one run for the translation efforts. I am happy to manually make sure metadata is in the appropriate spot for all new articles prepared for translation. Doc James (talk · contribs · email) 20:39, 30 January 2019 (UTC)
Support Infoboxes should be summarising information available in the main article text, so most of the time there will be a choice between citing the full reference in the body text or in the infobox. We ought to prefer having the full citation in article text, because it makes it easier to re-use snippets of text from one article in another, related one. There is rarely any corresponding need to copy infoboxes from one article to another. If this also improves the functionality of the Content Translation tool, then that is a real bonus. --RexxS (talk) 11:43, 31 January 2019 (UTC)
I'm willing to create and run a bot to accomplish the task as mentioned by Doc James. --Fz-29 (talk) 20:20, 1 February 2019 (UTC)
Anything more we need before User:Fz-29 builds this? Doc James (talk · contribs · email) 01:03, 2 February 2019 (UTC)

Y Done (Wikipedia:Bots/Requests for approval/Fz29bot) -- GreenC 16:40, 10 March 2019 (UTC)

Bot to reorganize the French Communes articles in Category:Articles needing translation from French Wikipedia

The task of the bot would be to identify all articles in Category:Articles needing translation from French Wikipedia that have a French Commune infobox, then add the |topic=geo parameter to their Expand French template, to categorize them in Category:Geography articles needing translation from French Wikipedia. Its second task would be more complex, as it should identify the Category:Communes of departement name category, and add the fitting expansion category of Category:departement name communes articles needing translation from French Wikipedia.

These tasks should apply to anything between four to seven thousand articles.

Since I know nothing about how bots function, there may be an easier way to do such a large-scale category move that I am not seeing. Sadenar40000 (talk) 18:47, 28 February 2019 (UTC)
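The first (easy) task is a single parameter insertion; a sketch, assuming the {{Expand French}} template contains no nested templates:

```python
import re

def add_geo_topic(wikitext):
    """Add |topic=geo to an {{Expand French}} template that lacks a
    topic parameter; leave already-parameterized templates alone."""
    def fix(m):
        inner = m.group(0)
        if "topic=" in inner:
            return inner  # already categorized
        return inner[:-2] + "|topic=geo}}"
    return re.sub(r"\{\{\s*Expand French[^{}]*\}\}", fix, wikitext)
```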

@Sadenar40000: is it okay to do just the first part to start with - that seems really easy. The second task is more complex --DannyS712 (talk) 18:51, 28 February 2019 (UTC)
@DannyS712: It is okay indeed, it would even be better, as there may be pages inside the geography topic that are not in their appropriate communes categories too, and would limit the categories the bots should search for communes to categorize to the category Geography articles needing translation from French Wikipedia. Sadenar40000 (talk) 10:01, 1 March 2019 (UTC)
@Sadenar40000: BRFA filed --DannyS712 (talk) 10:14, 1 March 2019 (UTC)

MOSDATE bot

Hello,

I would like to suggest a bot that fixes dates in Category:Use mdy dates and Category:Use dmy dates.

RhinosF1(chat)(status)(contribs) 18:09, 15 February 2019 (UTC)

I'd be happy to run the task (preferably in Python 2.7 or 3) or in any automated editor that works with ChromeOS, but I can't guarantee consistency in it running. RhinosF1(chat)(status)(contribs) 18:18, 15 February 2019 (UTC)
This was recently discussed at Wikipedia:Village pump (proposals)/Archive 156#"Datebot" (limited scope). Glancing through the !votes, it looks like it's not particularly straightforward and may not have enough support for a bot to be doing it. If someone were to want to do this, they should probably restart that discussion and try for much more input. And be a bit clearer about exactly which dates would be touched (only in citation templates?). Anomie 18:26, 15 February 2019 (UTC)
I wasn't aware of the discussion. As I've said, I'd be happy to do an automated one-time run on something like AWB if that would be better. I'd probably suggest only touching articles with the tag that haven't been updated in 12 months and run once every year or so. I'd personally go with citation templates only for a bot. If it was AWB generating a list to approve and then pushing those changes in batches, then any date on an article with the tag could be touched. I'm on IRC often if anyone would like to help develop and wants to discuss via PM (you must register your nick first). RhinosF1(chat)(status)(contribs) 18:34, 15 February 2019 (UTC)
I agree that this issue is not straightforward, and is not conducive to a bot operation. For example, any bot that changes access-date dates from BIGENDIAN to either 'mdy' or 'dmy' when the article has established use of BIGENDIAN for the access-date dates would be in violation of WP:DATERET and WP:CITESTYLE/WP:CITEVAR – but how is a bot supposed to figure this out? --IJBall (contribstalk) 21:45, 15 February 2019 (UTC)
Declined Not a good task for a bot. per the comments above. Primefac (talk) 15:50, 17 February 2019 (UTC)

Auto-classifying bot

There is a huge backlog within most Wikipedia Projects of unclassified articles. I've been recently assessing a number of these for the Politics Project, and have noticed a few patterns that I believe could be automated to heavily reduce this backlog.

  • At the moment I believe there is a bot that goes around and updates quality tags if another project's tag on the page has had its quality increased. However, it appears to do this only if there is currently a quality tag for the project. I believe this can and should be updated to do this regardless of whether there is such a tag for a project. If, for instance, it finds a page like this Talk:1842 New York gubernatorial election it should update the Politics and Election & Referendum templates for that article with the stub class tag, in the process removing that article from the list of articles that the Politics Project needs to work on - and this is not an isolated occurrence. I don't have the numbers, but I have seen this sort of thing numerous times.
  • It is also sometimes possible to discern the importance of an article with excellent accuracy from the assessment of surrounding projects. For instance, in this article Talk:1844 United States presidential election in New York it would be reasonable to take the assessment from the US/Government/PresElections taskforce, if such an assessment is low, and apply it to the politics one, because the Politics project is never going to consider an article that said taskforce considers low importance any higher than that. As such, a bot that projects could instruct to duplicate the tag of certain other taskforces or sub-taskforces, up to a certain 'level', to their own taskforce is what I am proposing. Alongside the above proposal, this should drastically reduce the backlog across numerous projects that are willing to set up the instruction page for it.

And of course, if we can heavily reduce the backlog like this, we will make attempting the remaining tasks that must be classified by hand less daunting, and thus more likely to be done. It is true that the second part of this proposal will sometimes result in incorrect classification, but the criteria will be up for each taskforce to determine and so I don't believe that risk should prevent this bot being created - and even if they are incorrectly assessed, a few incorrect assessments are better than numerous unassessed articles.

If no one is interested in taking this up then I do intend to get around to it at some point - unless someone is able to explain why it is stupid/unnecessary, though I think the first part of this proposal would be better as a modification to the existing tag-update bot.

-- NoCOBOL (talk) 07:50, 25 January 2019 (UTC)

I've forgotten just how many times I've had to explain this, but the WikiProject importance ratings are intentionally different. That is because they indicate how important the page is to that specific WikiProject or task force. --Redrose64 🌹 (talk) 11:59, 25 January 2019 (UTC)
I realize that. However, that doesn't mean they can't sometimes be derived from each other. I'm not suggesting that we set up a bot to duplicate all rankings, I'm suggesting we set up a bot that allows wikiprojects to set conditionals by which their importance ranking can sometimes be derived - I think this misunderstanding is my fault, I didn't explain things well. For instance, per my under-explained example above, the Politics Project could set a conditional where:
  • If the bot finds an article that is tagged as part of the politics project
  • And If that tag does not have an assessed importance
  • And If that article is also tagged by the Presidential Elections subproject of the Governance subproject of the United States Project
  • And If that article is assessed as low importance
  • Then assess the Politics Project Tag for that article as Low Importance
The idea is that sometimes importance will be the same; in this case, I believe it's extremely unlikely that the Politics Project will find something important when the Presidential Elections subproject does not, and from this idea I wish to enable participating projects to take advantage of these patterns when they discern them, and in doing so reduce the extreme backlog that most projects have. -- NoCOBOL (talk) 18:35, 25 January 2019 (UTC)
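The conditional described above could be driven by a small per-project rule table; the rule format here is invented purely for illustration:

```python
def derive_importance(banners, rules):
    """Fill in a missing importance on one project's banner from another
    project's assessment. 'banners' maps project name -> importance
    (None if unassessed); 'rules' maps (target, source) -> rating, and
    the rating is copied only when the source project matches it."""
    for (target, source), rating in rules.items():
        if banners.get(target) is None and banners.get(source) == rating:
            banners[target] = rating
    return banners
```

With the example from the list above, an unassessed Politics banner picks up "Low" from the Presidential Elections tag, and an already-assessed banner is never overwritten.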
I'm going to mark this as  Not done - the user has retired, and there is no clear support for this by wikiprojects --DannyS712 (talk) 05:34, 12 March 2019 (UTC)

Use of 'Mormon Church' and 'LDS Church' formally requested to be discontinued

Hello there! The Church of Jesus Christ of Latter-day Saints has formally requested that all references to the 'Mormon Church', 'LDS Church', and 'Mormonism' be discontinued by all users in all media outlets including Wikipedia. It would be astronomically easier for someone to make this change via AWB rather than manually search out and make every single change.

This request is complicated by two items: (1) no formal replacement for these 3 informal references has been provided by the Church of Jesus Christ of Latter-day Saints, which will affect how the AWB algorithm needs to work; (2) the name of a volume of this church's scripture is 'The Book of Mormon', and eliminating all references to the 'Mormon Church' or 'Mormonism' without editing all references to 'The Book of Mormon' may be difficult for AWB. — Preceding unsigned comment added by Dpammm (talkcontribs) 07:26, 10 March 2019 (UTC)

@Dpammm: unfortunately, this is counter to the current wikipedia guide on referring to the Church of Jesus Christ of Latter-day Saints. See this section for a full explanation. While I personally would be willing to take this on, it would require consensus to change the guidelines. There is currently a discussion at Wikipedia talk:Manual of Style/Latter Day Saints#changes based on recent style request from LDS Church? about changing this guideline, and I suggest that you contribute there. If the guideline does change, feel free to ping me, and I'll take a look at making an AWB regex. --DannyS712 (talk) 07:38, 10 March 2019 (UTC)
(edit conflict) This isn't appropriate for a bot request at this stage. While we obviously take the views of the subject into account, we don't automatically defer to them; instead, we go with what reliable sources, particularly those published after the event, call them (If the reliable sources written after the change is announced routinely use the new name is the criterion for changing how we refer to someone/something). There are quite a few examples of our continuing to use a name over the subject's objections because it's continued to be the name commonly used in the sources; North Korea is an obvious example that springs to mind. If you want Wikipedia to deprecate the use of these terms, you need to provide evidence that reliable sources are no longer using the terms "Mormonism", "LDS Church" etc, and then start a WP:RFC to deprecate the terms; only then is it time to start making bot requests, or even to start manually editing the articles. ‑ Iridescent 07:40, 10 March 2019 (UTC)

The Mormons have had this idea before but then they continued using Mormon themselves. I seriously doubt RS are going to follow this requested change. Legacypac (talk) 07:57, 10 March 2019 (UTC)

I agree, especially given that their preferred name ("Church of Jesus Christ") is obviously never going to catch on outside the LDS bubble since it could refer equally well to any Christian denomination. However this is a discussion better suited for Wikipedia talk:Manual of Style/Latter Day Saints#changes based on recent style request from LDS Church? rather than here. ‑ Iridescent 08:06, 10 March 2019 (UTC)

Isn't that pretty obnoxious though? The Church of Jesus Christ of Latter-day Saints has made a 100% formal shift in use of its name. It has openly and formally disassociated itself from the terms "Mormon Church", "LDS Church", and "Mormonism". Of course it could take time for common usage to change, but how is that going to happen if media outlets requested to make the change refuse to do so? I'm not an expert at this but it's fairly common sense to go along with the intended request despite its occurring in phases even as the 5 March 2019 re-affirmation news release and first presidency letter states and changes have already been made (www.lds.org to www.churchofjesuschrist.org, etc. as this article describes, including "mormonnewsroom" to follow suit shortly): https://www.mormonnewsroom.org/article/church-name-alignment --Dpammma (talk) 08:41, 10 March 2019 (UTC)

This was also just posted at the other discussion, although I think that discussion page is dead as it's the only comment since January https://twitter.com/APStylebook/status/1104071713476755457. How does anything get progressed to a decision one way or the other, especially on issues where some editors are obviously bound and determined to not respect this church's request despite adoption of these changes by the largest mainstream media outlet? --Dpammma (talk) 08:49, 10 March 2019 (UTC)

Wikipedia's only purpose is to summarise what reliable sources are saying; we don't make editorial calls of this kind. To take the same example I used earlier, the DPRK considers "North Korea" deeply offensive, but we nonetheless use the term because it's what the majority of reliable sources call the country. For the third time, if you want consensus to change you need to make your arguments at Wikipedia talk:Manual of Style/Latter Day Saints#changes based on recent style request from LDS Church?; we're not going to run a bot to unilaterally overturn consensus, so the onus is on you to persuade people to change the consensus. ‑ Iridescent 09:02, 10 March 2019 (UTC)
And, even if it were reasonable to make this change in the many places requested, it certainly wouldn't be fully automatic per WP:CONTEXTBOT. --Izno (talk) 13:59, 10 March 2019 (UTC)
I think this is a reasonable  Not done. No bot operator will take this on without consensus. You have already been pointed to the correct location for the consensus-gathering discussion; please feel free to return with a plan which can actually be implemented (per CONTEXTBOT) when there is a consensus. --Izno (talk) 13:59, 10 March 2019 (UTC)

NZ heritage site lists

Hi guys, I'd like to create lists of NZ heritage sites. Lists would be very similar to those at German Wikipedia, see List of monuments in New Zealand. The database with the heritage sites is available here: http://www.heritage.org.nz/the-list You can search all sites in a specific region and export CSV.

I'm not technically skilled enough to program a bot that'd help me to do that. Is there anyone keen to help out? A list of heritage sites is quite common practice here, see eg. Listed buildings in Windermere, Cumbria (town). Regards, Podzemnik (talk) 11:34, 22 January 2019 (UTC)

Well, the CSV contains this header and first record:

RegisterNumber,Name,RegistrationType,RegistrationStatus,DateRegistered,Address,RegisteredLegalDescription,ExtentOfRegistration,LocalAuthorityName,NZAANumbers

660,1YA Radio Station Building (Former),Historic Place Category 1,Listed,1990-02-15,"74 Shortland Street, AUCKLAND","Pt Allots 10‐11 Sec 3 City of Auckland (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District","Extent includes the land described as Pt Allots 10‐11 Sec 3 City of Auckland defined on DP 874 (CT NA67C/507), Pt Allot 12 Sec 3 City of Auckland (CT NA152/135), North Auckland Land District, and the building known as 1YA Radio Station Building (Former) thereon.",Auckland Council (Auckland City Council),[]

The problem will be mapping the "Name" field (eg. "1YA Radio Station Building (Former)") with the Wikipedia article name (Kenneth Myers Centre). There's no bot magic for that. -- GreenC 16:34, 22 January 2019 (UTC)

Turning a CSV into a table is not that difficult - it can be done with a word processor, like Word - import the CSV, then:
  1. Add a "| " to start of row
  2. Change all "," to " || "
  3. Change "end of line" to "end of line" + "|-" + "end of line"
so 
text 1,text 2,text 3,text 4
becomes
| text 1 || text 2 || text 3 || text 4
|-
Then one just needs to add the top and bottom of the table.
However - note http://www.heritage.org.nz/terms-and-conditions - "None of the content of this website may be reproduced, copied, used, communicated to the public or transmitted without the express written permission of Heritage New Zealand, except for the purposes of private study, research, review or education, as provided for in the New Zealand Copyright Act 1994.". That could be an issue, as there is a lot of text in some columns. Ronhjones  (Talk) 18:06, 22 January 2019 (UTC)
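The three steps above can also be done programmatically; a sketch using Python's csv module, which (unlike a plain find-and-replace) also copes with quoted fields containing commas, such as the Address field in the sample record:

```python
import csv
import io

def csv_to_wikitable(text):
    """Convert CSV text to a wikitable: first row becomes the header,
    remaining rows become || -separated data rows."""
    rows = list(csv.reader(io.StringIO(text)))
    out = ['{| class="wikitable"']
    out.append("! " + " !! ".join(rows[0]))  # header row
    for row in rows[1:]:
        out.append("|-")
        out.append("| " + " || ".join(row))
    out.append("|}")
    return "\n".join(out)
```

The copyright caveat above still applies to the content itself, of course.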
Right, can't copy-paste content from the web. The challenge is determining which Wikipedia article corresponds to a given CSV record, so you can make a list of Wikipedia articles. -- GreenC 18:40, 22 January 2019 (UTC)

Alright, thanks for the inputs guys, I'll try to do it myself! Podzemnik (talk) 07:51, 25 January 2019 (UTC)

Given the discussion above, I'm marking this as  Not done since it won't be done by a bot --DannyS712 (talk) 05:40, 12 March 2019 (UTC)

I just found and fixed an article with two Wikipedia links to two articles that were nothing but redirects back to it, and apparently that's all they had ever been. [5] Can you make a bot to check all Wikipedia links that point to pages that are redirects, then checks to see if that redirect points back to the page its coming from, and then remove the brackets around it so it doesn't link there anymore? If the link has a | in it, then keep what's after that and ditch the rest. Dream Focus 16:29, 26 January 2019 (UTC)
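The delinking rule as requested (keep the label after the pipe, drop the brackets) could be sketched like this; the redirect lookup is passed in as a callable standing in for a real API query, and per the discussion below such a bot would still need to skip redirects with possibilities:

```python
import re

def delink_self_redirects(title, wikitext, redirect_target):
    """Unlink [[Foo]] / [[Foo|label]] when Foo merely redirects back to
    'title'. 'redirect_target' maps a page name to its redirect target,
    or None if the page is not a redirect."""
    def fix(m):
        page, label = m.group(1), m.group(2) or m.group(1)
        if redirect_target(page) == title:
            return label  # keep the visible text, drop the brackets
        return m.group(0)
    return re.sub(r"\[\[([^\[\]|]+)(?:\|([^\[\]]+))?\]\]", fix, wikitext)
```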

Why is it a crime for an article to link to itself? See for example Promotion (chess)#Promotion to various pieces, where we find the parenthesis
(See [[Promotion (chess)#Promotion to rook or bishop|Underpromotion: Promotion to rook or bishop]] for examples ...
If this link were to be removed, people would need to find their own way to Promotion (chess)#Promotion to rook or bishop. --Redrose64 🌹 (talk) 16:58, 26 January 2019 (UTC)
Self-redirects are rarely a result of deliberate self-linking, especially without anchor links. --Izno (talk) 17:34, 26 January 2019 (UTC)

Redrose64 I meant a link to another article that then redirects back to the first article again. Dream Focus 18:18, 26 January 2019 (UTC)

You mean like a redirect with possibilities? We should definitely not delink those, there is always the possibility that the redirect gets turned into a full article: if this happens, the existing links will then point to the new article. --Redrose64 🌹 (talk) 19:30, 26 January 2019 (UTC)
Given the issues with redirects with possibilities, and bots discerning the context for a link, I'm going to mark this as  Not done. --DannyS712 (talk) 05:35, 12 March 2019 (UTC)

Tagging shill journal articles

Adverts pretending to be peer-reviewed papers are cited in thousands, possibly tens of thousands, of Wikipedia articles. Articles in paid supplements to journals are generally not independent sources. See this discussion for details.

Sometimes, the citation contains the abbreviation "Suppl.". In this case, the citation could be bot-tagged with {{Unreliable medical source|sure=no|reason=sponsored supplements generally unreliable per WP:SPONSORED and WP:MEDINDY|date=21 November 2024}}

The "sure=no" parameter will add a question mark to the tag, as, rarely, the supplement might actually be a valid source. I think these exceptions would probably be rare enough to manually mark for exclusion by the bot.

This would increase awareness of this problem among editors as well as encouraging editors to scrutinize the tagged sources. HLHJ (talk) 04:42, 27 January 2019 (UTC)

Unless a highly-sophisticated algorithm can be made, this will be denied per WP:CONTEXTBOT. There are zillions of reliable supplements (e.g. Astronomy & Astrophysics Supplement Series, Astrophysical Journal Supplement Series, Nuclear Physics B: Proceedings Supplements, Supplement to the London Gazette, The Times Higher Education Supplement, Raffles Bulletin of Zoology Supplement), so flagging something as problematic merely because it's from a supplement will not fly. Headbomb {t · c · p · b} 17:11, 27 January 2019 (UTC)
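For illustration, the proposed rule combined with the objection above would amount to something like the following sketch: only tag "Suppl." citations whose journal is not on a known-reliable whitelist (the names below are taken from the reply, not a complete list; the whole approach is hypothetical given the WP:CONTEXTBOT concerns).

```python
# Whitelist of supplements named above as reliable (incomplete; for
# illustration only).
RELIABLE_SUPPLEMENTS = {
    'Astronomy & Astrophysics Supplement Series',
    'Astrophysical Journal Supplement Series',
    'Nuclear Physics B: Proceedings Supplements',
}

def should_tag(citation_text, journal_name):
    # Tag only if the citation mentions "Suppl." and the journal is not
    # on the known-reliable list.
    return 'Suppl.' in citation_text and journal_name not in RELIABLE_SUPPLEMENTS
```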

Marking as  Not done per WP:CONTEXTBOT Kadane (talk) 17:23, 15 March 2019 (UTC)

Climate Bot

I'm thinking of making a bot that creates and/or updates the {{Weather box}} template based on climate data from BOM for Australian articles - with the possibility on expanding to other countries.

High level pseudocode

  • find wiki page that is missing data or needs data updated (search for whether it contains an existing weather box template, or is a "well sized" city/locality article and does not contain one)
-> extension: if weather boxes are more than 10(?) years old update them
  • scrape data from BOM
  • format and post update

This seems well suited to automation, especially since with climate change many of these weather boxes should change over time (assuming the longevity of Wikipedia).

This would be my first bot - and I'm not familiar with the specifics of WikiBots (yet). Do you think this task is best suited to a bot or a tool? I assume most bots start off as tools? Can you recommend a next step here?

"well sized" - measured by either quality grading (C class or above?) or number of bytes (>4000?). — Preceding unsigned comment added by Spacepine (talkcontribs) 01:45, 11 March 2019 (UTC)
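The "find pages that need work" step of the pseudocode could be sketched as a pure check over page wikitext. This assumes the weather box records its data period in a field containing a four-digit year (modelled here as |date=); the real template's field names would need checking.

```python
import re

# Does the page have a {{Weather box}} at all, and if so, is it stale?
WEATHER_BOX_RE = re.compile(r'\{\{\s*Weather box', re.IGNORECASE)
YEAR_RE = re.compile(r'\|\s*date\s*=\s*[^|}]*?(\d{4})')  # assumed field name

def needs_update(wikitext, current_year, max_age=10):
    if not WEATHER_BOX_RE.search(wikitext):
        return True   # no weather box yet: candidate for creation
    m = YEAR_RE.search(wikitext)
    if m is None:
        return False  # box present but undated: leave for a human
    return current_year - int(m.group(1)) > max_age
```

The scraping and posting steps would then run only on pages where this returns True.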

The weather data from BOM is Copyright Commonwealth of Australia. A systematic web scrape and re-hosting on Wikipedia may eventually raise some eyebrows, if done at volume and kept up to date. Or maybe not. But technically, content added to Wikipedia is supposed to be a free or open license. Ideally this data would include a citation back to BOM to meet WP:V requirements. Are there other free sources of weather data? -- GreenC 14:21, 11 March 2019 (UTC)
The template contains a citation field, so that's not a problem. I couldn't find any free open source data. Surely if that is a problem for a judicious bot or tool it raises questions about the use of weather boxes in general? --Spacepine (talk) 23:38, 11 March 2019 (UTC)
Bot scrapers are different: they are more noticeable and keep the data more up to date, thus competing with the BOM website as a source for this information. You could ask BOM for permission and submit the email to OTRS for verification as part of the BRFA. I bring this up because it can cause yourself and others a lot of trouble later on if BOM detects the bot and requests it to stop and/or to remove their data from Wikipedia. -- GreenC 14:54, 12 March 2019 (UTC)
Nothing wrong with asking. You really think that a 10(?) yearly update of long term averaged month-by-month climate data would cause problems though? It's not like it'd be posting the weekly... Would it change things if it were a tool instead? --Spacepine (talk) 15:21, 12 March 2019 (UTC)
It could be no problem at all, but since the data is Copyright and owned by BOM which is technically against the rules (theirs and ours) something to consider before embarking on a difficult project like this one. I recall a great weather website, they pulled data from WeatherUnderground and reformatted it with a flash interface that became very popular over time, then WeatherUnderground stopped the feed as it was competing with them and the site had to close down. I think weather data providers don't mind individual and small use cases but something at scale could say something. It's worth asking BOM, they may even encourage it since there would be many citations back to their site. -- GreenC 16:53, 12 March 2019 (UTC)
Good to know. Cheers. Any practical tips for a first time bot writer? I'm thinking using Pywikibot - start with a scraper, expand to a tool, sorts out OTRS shit, then maybe do a bot expansion? Do you think there's a significant utility in ClimateBot a similar script? --Spacepine (talk) 02:03, 13 March 2019 (UTC)
@Spacepine: Pywikibot would be a good framework to use for a task like this. Feel free to drop by my talk page if you have any questions on getting started! Kadane (talk) 17:35, 16 March 2019 (UTC)

 Not done - OTRS ticket needs to be filed by BOM granting permission to use copyrighted materials before bot can scrape the website. Once this is complete please open a new request if you aren't scripting the bot yourself. Kadane (talk) 17:35, 16 March 2019 (UTC)

Copyrighted material isn't the problem; websites such as this one by the Australian government are released under CC-BY 3.0. The problem is the 'circumvention of terms' section here. Dat GuyTalkContribs 00:14, 17 March 2019 (UTC)
Thanks for the clarification. Kadane (talk) 06:05, 17 March 2019 (UTC)

MortalOnline.com redirect to StarVault.se Suggestion

This will affect very few pages in WP; mostly those in any articles about the company or the games it makes.

Links that begin http://www.mortalonline.com (if still live) are now to be found under https://www.starvault.se. Furthermore, for Mortal Online's official forums and their post URLs: http://www.mortalonline.com/forums/threads/<words-in-title>.<specificnumber> is no longer the current URL pattern; it is (at this time)

https://www.starvault.se/mortalforums/threads/<words-in-title>.<specificnumber>

I think a bot could fix any of the old-form URLs into ones that could work. Nlaylah (talk) 22:58, 12 March 2019 (UTC)
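A fix for the forum-thread case could be a simple regex substitution following the URL pattern given above (a sketch; a real run would also want to handle plain mortalonline.com links and verify the targets resolve):

```python
import re

# Old forum-thread URLs, per the pattern described above.
OLD_FORUM = re.compile(r'https?://www\.mortalonline\.com/forums/threads/([^\s/]+)')

def rewrite_url(url):
    """Rewrite old mortalonline.com forum-thread URLs to the current
    starvault.se pattern; anything else is returned unchanged."""
    return OLD_FORUM.sub(r'https://www.starvault.se/mortalforums/threads/\1', url)
```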

@Nlaylah: - Would you please show an example of this substitution taking place? Will you also please give an example of an old link and a new link? Thanks Kadane (talk) 19:06, 15 March 2019 (UTC)
You're very welcome, Kadane. On the wikipedia page for Mortal Online, for instance, there is a reference cited-- Mortal Online#cite_note-23 "Patch Notes 1.82.00.00 Sarducaa". 2015-05-17. Retrieved 2015-07-01.-- which links to http://www.mortalonline.com/forums/threads/patch-notes-1-82-00-00-sarducaa.116395/ . This particular URL redirects (at the moment) to https://starvault.se/mortalforums/threads/patch-notes-1-82-00-00-sarducaa.116395/ . Does this answer the request above? (I don't claim to know if this is even necessary.)
@Nlaylah: - Got it! There are only 39 instances of mortalonline.com. This might be a better task for AWB. You can make a request at Wikipedia:AutoWikiBrowser/Tasks. It's also small enough to do manually since the link is only present on a handful of pages. I am going to say  Not done since the task isn't large enough for a bot. Kadane (talk) 17:26, 16 March 2019 (UTC)

Convert of rounding templates

Per this tfd, {{rnd}}, {{round}} and {{decimals}} are all to be merged. Would be great to get a bot to convert the necessary transclusions (more than 10,000). Please {{ping|zackmann08}} if you have any questions! --Zackmann (Talk to me/What I been doing) 16:51, 11 March 2019 (UTC)

@Zackmann08: I can do it, but I don't know the background of the templates or the tfd - can you post some examples of a before and after of converting rnd to round, and of converting decimals to round? Thanks, --DannyS712 (talk) 05:31, 12 March 2019 (UTC)
There are two ways to do the merge:
  • Method 1: Convert {{round}} and {{decimals}} to {{rnd}} in the wiki text - about 22,000 cases. Then redirect {{round}} and {{decimals}} to {{rnd}}. Examine all the arguments available for the present {{round}} template and determine how to translate those to the arguments available in the {{rnd}} template (same with {{decimals}}) then make the conversion in wikitext.
  • Method 2: modify {{rnd}} so that it seamlessly supports the current arguments available for {{round}} and {{decimals}} - if this is even possible or not requires some investigation of the {{rnd}} source as there might be argument name conflicts. If this is possible, it is a simple matter of redirecting {{round}} and/or {{decimals}} to {{rnd}}.
{{rnd}} is the target merge template because it has 261,105 use cases (compared to 15k and 8k for the others) and it has the most options available. Once everything is merged into {{rnd}} then {{round}} can redirect to it and the template docs can be changed to reflect the new name {{round}} going forward. Wouldn't be required to rename 261,105 legacy instances of {{rnd}} to {{round}}. -- GreenC 17:07, 12 March 2019 (UTC)
@GreenC: A comparison of the use of each of the templates on the examples given in their documentations is available at User:DannyS712/sandbox2. It should be straightforward to develop regexes to convert from the others to rnd --DannyS712 (talk) 20:28, 12 March 2019 (UTC)
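The trivial part of Method 1 reduces to a name swap where the parameters happen to line up; a minimal sketch is below. This assumes the arguments of {{round}}/{{decimals}} are accepted unchanged by {{rnd}}; instances needing real argument translation (per the sandbox comparison) would be excluded and handled separately.

```python
import re

# Plain rename of {{round|...}} / {{decimals|...}} to {{rnd|...}},
# for cases where no argument translation is needed.
RENAME_RE = re.compile(r'\{\{\s*(?:round|decimals)\s*\|', re.IGNORECASE)

def rename_to_rnd(wikitext):
    return RENAME_RE.sub('{{rnd|', wikitext)
```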
I have some time to do this task if its cool with y'all. Kadane (talk) 20:37, 12 March 2019 (UTC)
@Kadane: please!! --Zackmann (Talk to me/What I been doing) 20:37, 12 March 2019 (UTC)

Okay. Coding... Kadane (talk) 20:42, 12 March 2019 (UTC)

Yes go for it. With template merges there are sometimes edge-cases where the template contains something it shouldn't like a misspelled argument/key name or incorrect value type like text in a number field that can confuse the bot. -- GreenC 20:49, 12 March 2019 (UTC)
@GreenC, Kadane, and Zackmann08: Hang on just a second - I wrote the sandbox because I wanted to do this. I was going to do runs with AWB for each of the different cases where a change in syntax is needed... --DannyS712 (talk) 22:01, 12 March 2019 (UTC)
DannyS712, go for it my friend! I don't really care WHO does it. :-p --Zackmann (Talk to me/What I been doing) 22:02, 12 March 2019 (UTC)
@Zackmann08: Thanks, but I don't want to do redundant work with Kadane, so I'll wait for their response --DannyS712 (talk) 22:03, 12 March 2019 (UTC)

@DannyS712: Oops. I have already programmed it and am ready to file the BRFA. Here is the source. Didn't mean to step on toes. Let me know what you want to do. Kadane (talk) 22:16, 12 March 2019 (UTC)

@Kadane: In that case, why don't you do the initial bot run, and any edge cases where your bot can't cope I'll write a regex for and use awb. How does that sound? --DannyS712 (talk) 23:08, 12 March 2019 (UTC)
@DannyS712: Sounds good. I don't have the requirements to apply for AWB, so that will work perfectly. It should be small enough to do semi-automated. I'll file the BRFA now. Kadane (talk) 23:12, 12 March 2019 (UTC)

BRFA filed Kadane (talk) 23:20, 12 March 2019 (UTC)

@Kadane: and I've commented about how I think I can help --DannyS712 (talk) 23:22, 12 March 2019 (UTC)

I've withdrawn my BRFA. All of the edits were completed within the trial. The template is ready to be merged. @Zackmann08: Task is Y Done Kadane (talk) 08:05, 17 March 2019 (UTC)

Kadane, Each of the template still has a large number of transclusions... What do you mean it is done? Zackmann (Talk to me/What I been doing) 18:32, 17 March 2019 (UTC)
@Zackmann08: I objected to conversions that did nothing other than change the template name as cosmetic. I believe (unless Kadane or DannyS712 made a mistake in the BRFA), you can now resolve the TfD by redirecting Template:Round and Template:Decimals to Template:Rnd (and then deciding on the final template name). {{3x|p}}ery (talk) 20:34, 17 March 2019 (UTC)
Pppery, AH! Gotcha. I didn't realize that the 3 templates now performed the same. I agree that if a simple redirect can be achieved, there is no reason for a bot. Zackmann (Talk to me/What I been doing) 00:03, 18 March 2019 (UTC)
@Zackmann08: They don't; there was a bot run to convert the uses of round and decimals that needed changing, but that run ended up being so small it was done entirely as a trial. Regardless, this TfD is over and a redirect can happen. {{3x|p}}ery (talk) 00:05, 18 March 2019 (UTC)
Pppery, AH! Gotcha. Thanks for the explanation! Zackmann (Talk to me/What I been doing) 00:06, 18 March 2019 (UTC)
@Pppery: do you mind doing the redirects? I want to make sure they end up pointing the right template... Zackmann (Talk to me/What I been doing) 00:07, 18 March 2019 (UTC)
@Zackmann08: OK, I redirected the relevant templates. {{3x|p}}ery (talk) 00:22, 18 March 2019 (UTC)

Remind me bot

Hi, it would be wonderful if we had a bot that looked for uses of a template called {{remindme}} or something similar (with a time parameter, such as 12 hours, 1 year, etc. etc.) and duly dropped a message on your own talk page at the designated time with a link to the page on which you put the remindme tag. It would only send such reminders to the person who posted the edit containing the template in the first place. Kind of like the functionality of such bots on reddit, I guess. Fish+Karate 13:11, 20 November 2018 (UTC)

  • That looks like it should go through BRFA smoothly. There seems to be some use case. It looks simple enough, so I would volunteer to code it, but the only way I can imagine to make it work is by monitoring Special:RecentChanges (or the API equivalent) for additions of the template, and that looks extremely inefficient; beards grayer than mine might have a better idea. TigraanClick here to contact me 13:33, 20 November 2018 (UTC)
Monitor the backlinks (whatlinkshere) for the template, maintain a database of diffs to that backlinks list each time the bot runs via cron. New additions will show up. I wrote a ready-made tool Backlinks Watchlist. -- GreenC 14:20, 20 November 2018 (UTC)
Would it even need to maintain a database? Just go hourly (or some period) through a populated category and if it is time, notify and then change the template to {{remind me|notified = yes}} to disable the category (and also change the text of the template to something like "This user was reminded of this discussion on Fooember 24, 2078."). Galobtter (pingó mió) 14:35, 20 November 2018 (UTC)
Backlinks or category are pretty much the same from user and bot PoV, I think (maybe it is a different story on the servers though). Maybe a small advantage to cat, because it can be more easily reviewed by humans.
In either case the point of maintaining the database would be to limit the scans. If the template gets some traction, and a million user each places a thousand reminders asking for a reminder in 3018, scanning every still-active template every time could get inefficient (as the category is populated with lots of reminders that you need to scan every time). For a first version though, we do not care; if bad stuff happens, it will be easy enough to put a limit on templates left by users (either limit active templates per user, or how far in the future you can set reminders).
If no one else comes around to it, I will try to draft the specs this weekend. Fish and karate, please whip me if you see nothing next Monday. I would ask the bot to do it, but it does not exist yet. The trickiest part will probably be who can ask a reminder for whom (I would probably say that only User:X can ask for a notification to User:X, to avoid abuse of the tool, which then needs a bit of checking of who put the template on the page). TigraanClick here to contact me 17:36, 21 November 2018 (UTC)
You might be interested in m:Community Wishlist Survey 2019/Notifications/Article reminders. Anomie 03:21, 21 November 2018 (UTC)
That is interesting, I think the bot I'm envisioning is more general than that, you could place the template anywhere and it'll ping you to go back there after a set time has elapsed (potentially could also put a specific datetime). Tigraan I definitely think only user X could ask for a reminder for user X, otherwise it would be open to abuse. A throttle of no more than Y reminders per day (or Z open reminders overall) may also be a good idea. Fish+Karate 09:20, 22 November 2018 (UTC)

Basic spec, policy questions to be answered

OK, so the basic use is as follows:

User:Alice places a template (to be created, let's call it {{remind me}}) inside a thread of which they wish to be reminded. The user specifies the date/time at which the reminder should be given as an argument of the template (either as "on Monday 7th" or "in three days" - syntax to be discussed later). At the given date, a bot "notifies" Alice.

On a policy level, I see a few questions:

  1. What kind of notification?
  2. Can Alice ask for Bob to be notified?
  3. Should we rate limit (and if so how?)
  4. Where, if anywhere, should we get consensus for all that?

Depending on the choice for each of those, this will change the amount of technical work needed, but as far as I can tell, those questions entirely define the next steps (coding/testing/approval request etc.). Please discuss here if I missed something, but below to answer the questions. TigraanClick here to contact me 13:36, 25 November 2018 (UTC)

Discussion on the spec

I made a separate section for this because I am almost sure of the questions that need asking but less sure of the answers they should get. What follows is my $0.02 for each:

  1. The simplest would be a user talk page message or a ping from the page from where the notification originates, but maybe WP:ECHO can allow better stuff. A user talk message is easy to code (read: I know how to do it), but it might lead to some clutter.
  2. After thinking it over, it is not obviously a bad thing that this is technically feasible (we can certainly decide it is against policy or restrict the conditions; the question is whether it should be technically impossible). Alice has to post something to cause the bot to annoy Bob, so it is fairly similar to pings, which no one would call to terminate because of the potential for abuse. On the other hand, surely it would be OK and have some use case for a user to notify their own sockpuppet (e.g. I notify myself from my bot account). The only real problems I can imagine for cross-user postings is "privilege escalation" stuff:
    1. A user could cause the bot to notify users who have a protected talk page (at a protection level that the bot can access but not the user)
    2. A user could place many such templates in a single edit, causing multiple notices to be sent in a short time (while not being caught by rate limits on the servers)
  3. I do not think there is any legitimate-use reason for restricting the number of notifications. There might be a technical reason to avoid having large amounts of pending notifications, depending on how the bot works (see previous discussion), or counter-vandalism reasons (e.g. allowing not only self-notifications, but only X pending notifications originating from a single user at a given time, so that a spam-notifier vandal cannot get far). If we do not allow cross-user notifications, I think we can go without rate limiting until performance becomes an issue.
  4. The bot request page is not watched a lot, but I am not sure where else it can go. Maybe worth cross-posting to WP:VPP?

(Ping: Fish and karate) TigraanClick here to contact me 13:36, 25 November 2018 (UTC)

Answering the set of questions (this is as I see the bot working - and note when it comes to this kind of thing I'm a vision man, not details!)
  1. What kind of notification?
    A message on your talk page ("Hi (user name), here's the reminder you asked for - {link}") People could, I guess, if they prefer, have the bot post to a defined sub page (I would see this as a "phase 2" development). A small, unobtrusive ping might also work but that requires an edit and I can see a busy thread that lots of people are interested in being peppered with these pings being an unpopular choice. Keeping it to the user's own talk page is less imposing on other users.
  2. Can Alice ask for Bob to be notified?
    No. Alice can only ask for Alice to be notified. Let's keep it simple, at least initially.
  3. Should we rate limit (and if so how?)
    Initially I think it's not a terrible idea to ensure the capability is there in the code to throttle it in case it starts causing (as yet unforeseen) issues. The bot can just refuse to provide more than X reminders a day if issues arise.
  4. Where, if anywhere, should we get consensus for all that?
    Wikipedia:Bots/Requests for approval to sign off the bot, presumably. I think WP:VPP for suggestions would also be good.
A note to say thank you, Tigraan; I appreciate the thought and effort you're putting into this. Fish+Karate 09:23, 26 November 2018 (UTC)
@Fish and karate: About 4: if we go to BRFA with the whole agreement of two of us, they are going to tell us to get consensus that the task is useful somewhere else. Per the guide at WP:BRFA: If your task could be controversial (e.g. (...) most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. (...) Link to this discussion from your request for approval. Again, VPP is the catch-all, but that's because I have no other idea. Maybe a link from the talk page of WP:PING as well, since the functionality is closely related.
(Oh, and save your thanks for after the bot sees the light of day.) TigraanClick here to contact me 15:31, 26 November 2018 (UTC)
Tigraan, I would absolutely ask for a well-advertised discussion with consensus for this bot, if I came across the BRFA. I think it's an idea I would use (I do on reddit!), but I could see it becoming unintentionally disruptive (Let's say - 50 people use the "remindme" template on a popular arbcom case or RFA). I'm not sure what the best way to address that would be. SQLQuery me! 22:37, 3 December 2018 (UTC)
I see the point, perhaps this will remain a pipe dream. If someone can think of a way to work around that, that would be welcome. Fish+Karate 15:07, 6 December 2018 (UTC)
Since m:Community Wishlist Survey 2019/Notifications/Article reminders was #8 on the wishlist, hopefully it gets implemented in a way that works more like the watchlist: click a button, and it sends you a notification when the time is up without having to put a template on the page for everyone else to be annoyed by. Anomie 02:03, 4 December 2018 (UTC)


Remove Template:TheFinalBall

We please need a bot to remove (and NOT subst; the source is non-RS) all entries of {{TheFinalBall}} following the consensus at Wikipedia:Templates for discussion/Log/2019 February 13#Template:TheFinalBall, and save @Zackmann08: from removing it manually (as they have been doing). GiantSnowman 20:54, 13 March 2019 (UTC)

GiantSnowman, LOL!!! Its all good my friend. I'm using a user-script so it isn't all the time consuming. Plus I'm almost done. :-) Zackmann (Talk to me/What I been doing) 20:56, 13 March 2019 (UTC)
@Zackmann08 and GiantSnowman: You sure? I could have that bot ready in probably 20min. --TheSandDoctor Talk 21:06, 13 March 2019 (UTC)
TheSandDoctor, yes but how long for the BRFA? lol. If you want to I certainly won't stop you! Zackmann (Talk to me/What I been doing) 21:07, 13 March 2019 (UTC)
@Zackmann08: Good point. I don't think any other BAGs are around right now. I could trial it if someone else made it and probably approve, but can't do that for myself for obvious reasons. Your call. --TheSandDoctor Talk 21:09, 13 March 2019 (UTC)
TheSandDoctor, if you don't have anything else on your to do list, go for it! Zackmann (Talk to me/What I been doing) 21:10, 13 March 2019 (UTC)
@Zackmann08: In that case I shall start work on it now and check back in shortly. --TheSandDoctor Talk 21:11, 13 March 2019 (UTC)
TheSandDoctor, have fun! Zackmann (Talk to me/What I been doing) 21:12, 13 March 2019 (UTC)
@Zackmann08: Just to clarify: all instances of the template are to be removed and not replaced with anything. Correct? --TheSandDoctor Talk 21:13, 13 March 2019 (UTC)
TheSandDoctor, correct. The TFD basically decided that it wasn't a reliable source. Zackmann (Talk to me/What I been doing) 21:24, 13 March 2019 (UTC)
@TheSandDoctor: I'll file a BRFA in 2 minutes and can run it later today --DannyS712 (talk) 21:54, 13 March 2019 (UTC)
@DannyS712: I just filed it BRFA filed --TheSandDoctor Talk 21:54, 13 March 2019 (UTC)
@TheSandDoctor: Oops, edit conflict, Wikipedia:Bots/Requests for approval/DannyS712 bot 15 --DannyS712 (talk) 21:57, 13 March 2019 (UTC)
@TheSandDoctor: If there is no-one else around to approve yours, do you want to look at mine? --DannyS712 (talk) 21:59, 13 March 2019 (UTC)
@DannyS712: SQL was able to trial it for me. Do you want me to delete yours? --TheSandDoctor Talk 22:03, 13 March 2019 (UTC)
@TheSandDoctor: Sure --DannyS712 (talk) 22:04, 13 March 2019 (UTC)
@DannyS712:  Done --TheSandDoctor Talk 22:05, 13 March 2019 (UTC)
There's only about 300 of these... this can be AWB-removed without a BRFA. --Izno (talk) 01:45, 14 March 2019 (UTC)
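For reference, the removal (not substitution) amounts to stripping every {{TheFinalBall}} call; a minimal sketch, assuming no nested templates inside its arguments:

```python
import re

# Remove every {{TheFinalBall}} call, plus any whitespace immediately
# before it, leaving nothing behind (per the TfD outcome).
THEFINALBALL_RE = re.compile(r'\s*\{\{\s*TheFinalBall\s*(?:\|[^{}]*)?\}\}')

def strip_thefinalball(wikitext):
    return THEFINALBALL_RE.sub('', wikitext)
```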

Template:PBB Controls

Hey, based on this TfD the template {{PBB Controls}} should be removed. It currently has 4395 transclusions, so a bit much for AWB. Could anyone help with a bot? Thanks. --Gonnym (talk) 08:24, 20 March 2019 (UTC)

@Gonnym: BRFA filed --DannyS712 (talk) 13:31, 20 March 2019 (UTC)
Thanks Danny! --Gonnym (talk) 13:35, 20 March 2019 (UTC)

This is Y Done. BRFA has been approved Kadane (talk) 15:49, 20 March 2019 (UTC)

Ignore

Sorry ~Jer (TalkContributions) 12:17, 22 March 2019 (UTC)

STALE Drafts

This long standing project Wikipedia:WikiProject_Abandoned_Drafts/Stale_drafts would benefit from a bot to do three things we now do manually on the 40 or so remaining numbered subpages: 1. Remove red links to deleted articles. 2. Remove links to pages that are redirects (page has been moved to mainspace, draftspace, or redirected to an article). 3. Remove links to pages that are now completely blank or have a userspace page blanked template. Once one of these three cases occurs the project no longer cares about the name of the page or the link to it. If a bot could sweep through the pages daily or even weekly this would save a ton of time manually removing red links and checking and removing links that are redirects. Even unlinking pages would make manually removing them from the lists much faster. If this is not clear look at the history of any of the numbered list pages to see the process. Legacypac (talk) 01:35, 24 February 2019 (UTC)
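The three removal cases reduce to a single keep/drop decision per list line; a sketch, where status_of is a hypothetical stand-in for API queries (page existence, redirect status, blanked content):

```python
import re

# Wikilinks on a list line, with or without a pipe.
LINK_RE = re.compile(r'\[\[([^\[\]|]+)(?:\|[^\[\]]*)?\]\]')

def keep_line(line, status_of):
    """status_of(title) -> 'missing', 'redirect', 'blank', or 'ok';
    a real bot would answer this from the API. A list line is kept
    only if it still links to at least one live, non-blank draft."""
    return any(status_of(t) == 'ok' for t in LINK_RE.findall(line))
```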

@Legacypac: I while ago I asked @Enterprisey about a script to help with that. Maybe we should wait and see if they have something already? --DannyS712 (talk) 01:45, 24 February 2019 (UTC)
I'm no bot expert but I know the pain involved in doing something that seems like it could be automated, and it's on the back end too so we don't need to worry about watchlists and readers. Legacypac (talk) 01:48, 24 February 2019 (UTC)
@Legacypac: In that case, it should be pretty easy to adapt User:DannyS712 test/page.js and User:DannyS712 test/dead.js to use the api to check if pages exist, and if not remove them (just deals with red links). Then, if they do exist, check if they are redirects, etc, and remove based on that. I don't have much time this week though...
alternatively, you could try to fork User:The Transhumanist/RedlinksRemover.js, but IDK --DannyS712 (talk) 02:00, 24 February 2019 (UTC)
@Legacypac: BRFA filed --DannyS712 (talk) 07:19, 19 March 2019 (UTC)

Categories for Discussion bot

Wikipedia:Categories for discussion is looking for a new bot to process category deletions, mergers, and moves. User:Cydebot currently processes the main /Working page, but there is a growing list of issues that call out for a replacement bot:

  1. Cydebot's default is to create a category redirect in most (though, oddly, not all) cases of renaming or merging. This is helpful in some cases (e.g. Swaziland → Eswatini) but unhelpful or downright wrong in most others, and it promotes future miscategorization (see examples here). Quite simply, a bot should not be creating thousands of category redirects without either more specific parameters or direct human guidance. Currently, this is substantially adding to the workload of the few admins who close CfDs and contributing to backlogs.
  2. Cydebot unexpectedly stalls on certain large runs.
  3. Cydebot no longer processes the /Large and /Retain subpages.
  4. The bot's operator is no longer very active (just 4 edits last year), and therefore unable to address these issues.

At a minimum, the new bot should process the main /Working page:

  • Deleting, merging, and renaming (i.e. moving) categories, as specified, with appropriate edit summaries.
  • Deleting the old category with an appropriate deletion summary.
  • In the case of renaming, removing the CfD notice from the renamed category.

Ideally, it would also do some or all of the following:

  • Process the /Large and /Retain subpages.
  • Accept manual input when a category redirect should be created—for example, by recognizing leading text when a redirect is wanted, such as * REDIRECT [[:Category:Foo]] to [[:Category:Bar]].
  • Recognize and update category code in transcluded templates. This would need to be discussed/tested to minimize errors and false positives.
  • Recognize and update incoming links to the old category. This would need to be discussed/tested to minimize errors and false positives.
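The proposed manual-input convention for redirects could be recognized with a sketch like the following (the exact directive syntax is only a suggestion above, so the pattern is an assumption):

```python
import re

# The proposed manual-input line, e.g.
# "* REDIRECT [[:Category:Foo]] to [[:Category:Bar]]".
DIRECTIVE_RE = re.compile(
    r'^\*\s*REDIRECT\s*\[\[:(Category:[^\[\]]+)\]\]\s*to\s*\[\[:(Category:[^\[\]]+)\]\]\s*$')

def parse_redirect_directive(line):
    """Return (old_category, new_category) if the line is a redirect
    directive, else None."""
    m = DIRECTIVE_RE.match(line)
    return (m.group(1), m.group(2)) if m else None
```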

Your assistance would earn the gratitude of some very tired and increasingly frustrated CfD'ers.

Thank you, -- Black Falcon (talk) 20:48, 18 February 2019 (UTC)

@Black Falcon: I may be able to help - see my notes at the CfD talk page about a bot for tagging. A similar functionality would be to go through all of the pages in a category and recategorize them, or remove a category so it can be deleted. I'm really busy the next week, but I'm interested in working on this (though I would need someone else with a bit to operate it for the deletions, etc) --DannyS712 (talk) 20:55, 18 February 2019 (UTC)
ArmbrustBot is already approved for this (tasks 1 and 6); alerting the operator. {{3x|p}}ery (talk) 04:27, 19 February 2019 (UTC)
ArmbrustBot requires operator input to run, and it cannot perform admin actions. — JJMC89(T·C) 05:02, 19 February 2019 (UTC)
I'm willing to take this on. — JJMC89(T·C) 05:02, 19 February 2019 (UTC)
BRFA filed — JJMC89(T·C) 07:41, 24 February 2019 (UTC)
Thank you! -- Black Falcon (talk) 03:51, 4 March 2019 (UTC)

The task is rather simple. Find all pages with Foobar (barfoo). If they redirect to Foobar, tag those with {{R from unnecessary disambiguation}}. This should be case-sensitive (e.g. Foobar (barfoo) → FOOBAR should be left alone).

Could probably be done with AWB to add/streamline other redirect tags if they exist. Headbomb {t · c · p · b} 13:15, 14 November 2018 (UTC)
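The case-sensitive check itself is straightforward; a sketch (finding the candidate titles would still need a dump scan or intitle search):

```python
import re

# "Title (disambiguator)" with a single trailing parenthetical.
DISAMBIG_RE = re.compile(r'^(.+?) \([^()]+\)$')

def is_unnecessary_disambiguation(redirect_title, target_title):
    """True when "Foobar (barfoo)" redirects to exactly "Foobar".
    Case-sensitive, so "Foobar (barfoo)" -> "FOOBAR" is left alone."""
    m = DISAMBIG_RE.match(redirect_title)
    return bool(m) and m.group(1) == target_title
```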

How would you find these pages? Via a database dump and regular expressions I assume? --TheSandDoctor Talk 07:24, 1 December 2018 (UTC)
@TheSandDoctor: via a dump scan yes. Or some kind of 'intitle' search. Headbomb {t · c · p · b} 05:08, 6 December 2018 (UTC)
@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 20:20, 19 December 2018 (UTC)
@Headbomb: No, sorry. I had forgotten about this. You are only anticipating pages like your Footer example above, right? What I mean is: Joe (some text) redirecting to Joe would be tagged with {{R from unnecessary disambiguation}}? Or am I getting this completely wrong/missing something? --TheSandDoctor Talk 20:33, 19 December 2018 (UTC)
Not sure what you mean by my Footer example, but basically if you have Foobar (whatever) → Foobar, then tag Foobar (whatever) with {{R from unnecessary disambiguation}}. Nothing else. Headbomb {t · c · p · b} 21:26, 19 December 2018 (UTC)
@Headbomb: That would've been autocorrect being sneaky. Foobar is what I meant (did it again writing this) and that does clarify it for me. I will work on this tonight or tomorrow. --TheSandDoctor Talk 00:13, 20 December 2018 (UTC)

@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 08:24, 13 January 2019 (UTC)

@TheSandDoctor:? Headbomb {t · c · p · b} 18:58, 9 March 2019 (UTC)
Or anyone else at this point, really. Headbomb {t · c · p · b} 19:45, 17 March 2019 (UTC)
Few issues. Some of these won't be disambiguation; for example, a lot of songs are commonly referred to in track listings in the style Name Not Actually Sung (Name everyone calls it), and it'd be fairly reasonable to include the track-listing version of the title as a redirect, even though it's not disambiguation. It's not impossible for our redirect convention to occasionally catch valid alternative names for something. Adam Cuerden (talk)Has about 6.4% of all FPs 19:56, 17 March 2019 (UTC)

@Headbomb: what exactly are you looking for? Do you just want someone to do a database scan, or do you want a bot to fix this? It sounds like there might be context issues with a task like this, per Adam Cuerden. Kadane (talk) 00:53, 18 March 2019 (UTC)

@Kadane: I want a bot to tag Foobar (whatever) with {{R from unnecessary disambiguation}} when Foobar (whatever) redirects to Foobar. I don't see much context-bot stuff here, although those would get discovered during trial if there are any. If so, "whatever" could be limited to a 'sureshot' disambiguator like \(.*(album|song|journal|magazine|publisher)\) and the like. Headbomb {t · c · p · b} 01:02, 18 March 2019 (UTC)
@Headbomb: I will look into this later this week if someone else hasn't picked it up. Kadane (talk) 05:45, 18 March 2019 (UTC)

@Headbomb: I couldn't sleep tonight and ran a database query to find all redirects that end with parenthesis. I have a question about how the bot would handle a few cases.

  1. "C" Is for (Please Insert Sophomoric Genitalia Reference HERE) -> "C" Is for (Please Insert Sophomoric Genitalia Reference Here)
  2. Babbacombe Lee (album) -> "Babbacombe" Lee
  3. 0N (disambiguation) -> 0N (which contains a {{R from disambiguation}})

I am assuming the bot would skip the page if any {{R from ...}} templates were present? Should the bot ignore any characters such as ", ★, or *? If a page ends in (disambiguation) should it be tagged with {{R from disambiguation}} or {{R from unnecessary disambiguation}}? Once I have a better idea of the criteria I will put together a script to estimate the number of articles that will be edited. Kadane (talk) 08:40, 18 March 2019 (UTC)

1) Shouldn't be tagged, 2) Should be tagged with {{R from unnecessary disambiguation}}, if bot logic can handle that, 3) Should be tagged with {{R to disambiguation page}}. Headbomb {t · c · p · b} 08:47, 18 March 2019 (UTC)

@Headbomb: - Okay, from my understanding there are two cases. Foobar (^disambiguation) -> Foobar and Foobar (disambiguation) -> Foobar. {{R from unnecessary disambiguation}} should be added to the ^disambiguation cases, and {{R to disambiguation page}} to the disambiguation cases (if the template is missing). In that case there are 207 pages that need the template {{R to disambiguation page}} and there are 55,824 pages that need {{R from unnecessary disambiguation}}. If my understanding sounds correct I am ready to go to BRFA. Looking forward to your reply. Kadane (talk) 05:34, 19 March 2019 (UTC)

P.S. Should pages that match this criteria that also have {{R from incomplete disambiguation}} be ignored or tagged as well? Example Zoom (song), Zoom (album), Zoom (TV series), Zoom (film), Zoom (2016 film), Zoom (TV channel). Kadane (talk) 06:56, 19 March 2019 (UTC)
I'm afraid I don't understand what you mean by Foobar (^disambiguation) -> Foobar and Foobar (disambiguation) -> Foobar. Skip those with {{R from incomplete disambiguation}}. Headbomb {t · c · p · b} 06:59, 19 March 2019 (UTC)
@Headbomb: - I am using ^ as not. So (^disambiguation) means that the page doesn't end in (disambiguation). Foo (actor) would fall under ^disambiguation, whereas Foo (disambiguation) would fall under the disambiguation case. User:KadaneBot/Sandbox has a list of 1000 pages the bot would change, as well as the template the bot would add. Kadane (talk) 07:22, 19 March 2019 (UTC)
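Putting the agreed criteria together, the tagging decision could be sketched as a small Python classifier. This is illustrative only: the titles are hypothetical, the comparison is case-sensitive per the discussion above, and quoted-title cases like Babbacombe Lee (album) → "Babbacombe" Lee are skipped here because the base titles differ.

```python
import re

# "Foobar (album)" -> base "Foobar", disambiguator "album"
DAB_RE = re.compile(r"^(?P<base>.+?) \((?P<dab>[^()]+)\)$")

def classify(redirect_title, target_title):
    """Return the redirect tag to add, or None to skip.

    Case-sensitive, so "Foobar (barfoo)" -> FOOBAR is left alone,
    and titles differing beyond the disambiguator are skipped.
    """
    m = DAB_RE.match(redirect_title)
    if m is None or m.group("base") != target_title:
        return None
    if m.group("dab") == "disambiguation":
        return "{{R to disambiguation page}}"
    return "{{R from unnecessary disambiguation}}"
```

A real bot would feed this from a dump scan or redirect-table query rather than hard-coded titles.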

BRFA filed @Headbomb: Kadane (talk) 16:13, 19 March 2019 (UTC)

Removal of former infobox parameter

{{infobox cricketer}} used to have a |deliveries= parameter which was removed in 2009. There are 7000ish pages using the parameter, which makes up the overwhelming majority of the unknown parameter tracking category. I made a start on clearing them off with AWB but figured it may be quicker to get a bot to do them. There is a list of possibly affected pages if that helps, and I did a regexp replace of \|\s*deliveries\s*=.*\n (with nothing) as part of some more targeted clean-ups. Could a bot please remove the rest? Spike 'em (talk) 14:42, 13 March 2019 (UTC)
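The regexp replace described above can be reproduced in Python; because only whitespace may appear between the parameter name and the "=", the numbered |deliveries1=...|deliveries4= parameters are left untouched. (A sketch, not the exact AWB settings used.)

```python
import re

# Matches a whole "|deliveries = value" line; numbered parameters
# (|deliveries1= etc.) do not match because only whitespace may
# sit between the name and the "=".
DELIVERIES_RE = re.compile(r"\|\s*deliveries\s*=.*\n")

def strip_deliveries(wikitext):
    return DELIVERIES_RE.sub("", wikitext)
```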

Infobox film category? Also I note that {{Infobox cricketer}} has |deliveries1=... |deliveries4= in them. This may not be a straightforward removal (which could very well be against WP:COSMETICBOT on its own). Headbomb {t · c · p · b} 14:56, 13 March 2019 (UTC)
Sorry, I c+p from the wrong place on another talk page, have amended the category above. The RegExp above has not picked up the deliveries1-4, but does find the unnumbered version. The page you link to says that changes that help with the "administration of the encyclopedia" are substantive, which may apply here. Spike 'em (talk) 15:41, 13 March 2019 (UTC)
@Spike 'em: So just to clarify, replace every instance of the deliveries parameter with a blank string? I can do it, and will file a BRFA in the next few days as long as Headbomb has no objection to the "cosmetic"-ness of this task (they haven't replied yet) --DannyS712 (talk) 16:03, 13 March 2019 (UTC)
Yes. There are a mixture of values of the parameter out there, but the vast majority are |deliveries=balls so replacing those with blank string would leave few enough for me to do by hand. Spike 'em (talk) 16:13, 13 March 2019 (UTC)
@Spike 'em: Okay. I'll do this, pending any objection about the cosmetic aspect --DannyS712 (talk) 16:21, 13 March 2019 (UTC)
If it is filling up a tracking category it probably should be addressed one way or another manually or by bot, and easier by bot. BTW is the request to delete the entire key=value or only the value portion of the string? At first it sounded like it was to delete the whole key=value but now it sounds like only delete the value, and only when it is "balls". Just wanted to clarify for you DannyS712. -- GreenC 16:33, 13 March 2019 (UTC)
It would be tidier to do the whole thing (key=value), but removing just the "balls" seems to remove the page from the tracking category. The latter comment was because 95% of the values listed are balls, so I could do the rest by hand if that made it easier (and in fact I think the non-balls ones need some other intervention). Spike 'em (talk) 16:50, 13 March 2019 (UTC)
I am sometimes 5 and balls. --Izno (talk) 16:53, 13 March 2019 (UTC)
Getting here late, but in response to the above: first, definitely remove the parameter and the value, not just the value. Second, if the link above does not help generate a list, you can just run through Category:Pages using infobox cricketer with unknown parameters. Third, technically it's not a cosmetic edit, because you'll be removing a hidden category (I know, it's pretty technical, but still). Removal of deprecated parameters is a pretty standard bot task. – Jonesey95 (talk) 17:52, 13 March 2019 (UTC)
It's not cosmetic at all. Without this task being done it's almost impossible for anyone to find the significant errors in the infobox. I've been working through other errors flags by hand and there are so many utterly random errors being introduced - sometimes it's a space (autocorrect may be an issue), sometimes it's a capital letter introduced, other times it's just something utterly random. With nearly 7,000 errors being caused by the deliveries parameter it's much harder to work through and solve the other issues. Blue Square Thing (talk) 20:14, 13 March 2019 (UTC)
@Spike 'em: BRFA filed --DannyS712 (talk) 07:23, 19 March 2019 (UTC)
Thanks Spike 'em (talk) 08:11, 19 March 2019 (UTC)


Remove living-yes, etc from talkpage of articles listed at Wikipedia:Database reports/Potential biographies of dead people (3)

Hi bot people. I was wondering whether it might be appropriate/worthwhile/a good idea to get a bot to remove "living=yes", "living=y", "blp=yes", "blp=y", etc from the talkpages of the articles listed at Wikipedia:Database reports/Potential biographies of dead people (3). I recognize that automating such a process might result in a few errors, but I think that would be a reasonable tradeoff compared to how tedious it would be for humans to check and update all 968 articles in the list one by one. (And hopefully, for those few(?) articles where an error does occur, someone watching the article will fix it). I spot-checked a random sample of articles in the list, and for every one I checked, it would have been appropriate to remove the "living=yes", etc from the talkpage, i.e. the article had a sourced date of death. To minimize potential errors, I would suggest the bot skips any articles which cover multiple people, e.g. ones with "and" or "&" in the title and Dionne quintuplets, Clarke brothers, etc. Thoughts? DH85868993 (talk) 12:53, 15 January 2019 (UTC)

That last might not be easy to bot-automate. Though, if instead of a bot we get a script, it would be possible to quickly deal with any multiples before running it. Adam Cuerden (talk)Has about 8.8% of all FPs 13:03, 15 January 2019 (UTC)
@DH85868993 and Adam Cuerden:
I think this should be easy, but just to clarify:
  1. Get all of the pages linked to from the report ([6], with a bit of regex magic)
  2. Remove titles containing "and" or "&"
  3. Convert to talk pages
  4. Edit each talk page to set "living=yes" etc to "living=no" etc
--DannyS712 (talk) 07:56, 23 February 2019 (UTC)
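Step 4 above amounts to a parameter flip. A rough sketch (banner templates and parameter spellings vary, so this matches the parameter names loosely):

```python
import re

# Flip living/blp flags to "no" in talk-page banner wikitext,
# handling living=yes/y and blp=yes/y with arbitrary spacing.
LIVING_RE = re.compile(r"(\|\s*(?:living|blp)\s*=\s*)(?:yes|y)\b",
                       re.IGNORECASE)

def mark_dead(talk_wikitext):
    return LIVING_RE.sub(r"\g<1>no", talk_wikitext)
```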
@DannyS712: Yes, that sounds right, with the exception that the following articles should also be removed from the list: Dionne quintuplets, June and Jennifer Gibbons, Podgórski sisters and Bacon Brothers (gangsters). DH85868993 (talk) 21:23, 23 February 2019 (UTC)
@DH85868993: take a look: User:DannyS712 test/dead.js will print to your console an array of all of the pages to edit (don't try this on other pages, it'll probably break) and User:DannyS712 test/isdead.js lets you, on a specific page, change the blp parameters. I think the only thing remaining is to file a BRFA to be able to combine them. What do you think? --DannyS712 (talk) 00:21, 24 February 2019 (UTC) ping fixed --DannyS712 (talk) 00:22, 24 February 2019 (UTC)
@DannyS712: When I click on the "dead blp" tab, I get a message "Line: 21 Error: 'get_page' is undefined" but that could be user error on my part (I'm not very experienced with javascript). I'm using IE11 on Win7 in case that matters. DH85868993 (talk) 00:57, 24 February 2019 (UTC)
@DH85868993: as far as I can tell it's on your end; the get_page function is defined at User:DannyS712 test/page.js, which is imported. --DannyS712 (talk) 01:27, 24 February 2019 (UTC)
@DannyS712: If it works for you, that's good enough for me (I had a look at the script and it looked OK). Feel free to go ahead and file the BRFA. DH85868993 (talk) 01:46, 24 February 2019 (UTC)
@DH85868993: BRFA filed --DannyS712 (talk) 02:11, 24 February 2019 (UTC)
Thanks. DH85868993 (talk) 02:14, 24 February 2019 (UTC)

Population of Austrian municipalities

The Austrian metadata templates storing population figures were deleted with a consensus that they should be replaced by WikiData figures. I set up a new template for that, Template:Austria population Wikidata, and now it should be implemented (as in this diff) so that the updated figures can be displayed. Hopefully a bot can help with that.--eh bien mon prince (talk) 19:35, 9 March 2019 (UTC)

Hi eh bien mon prince, would it be possible to make a WikiData query listing names of articles on enwiki that are ready to use the template? -- GreenC 20:09, 9 March 2019 (UTC)
@GreenC: This should work: query.--eh bien mon prince (talk) 20:28, 9 March 2019 (UTC)
eh bien mon prince very helpful thanks, about 2100 articles. Does anyone else want to do this? I can, but will give it a day or two for anyone else. Probably best done with AWB search/replace. -- GreenC 16:28, 10 March 2019 (UTC)
@GreenC: I'm adding area figures to the template as well, so waiting a day might actually be for the best.--eh bien mon prince (talk) 16:35, 10 March 2019 (UTC)
The area import is complete. If possible, it would be nice to add area_footnotes as well, as all the figures have a source now.--eh bien mon prince (talk) 09:55, 11 March 2019 (UTC)

BRFA filed -- GreenC 14:42, 12 March 2019 (UTC)

OSM location map for German districts

A zoomable, labeled location map can be included in the articles about German districts by adding {{Germany district OSM map|parent_subdivision=QXXXX}} to the 'map' parameter of {{Infobox District DE}}, where QXXXX is the Wikidata ID of the German state the district belongs to. A live example of the template can be seen in the Nordfriesland (district) article.--eh bien mon prince (talk) 08:24, 12 March 2019 (UTC)

@Underlying lk: It should be doable, but please ensure that there is consensus to include such maps automatically. I'll note that, on my computer, the map didn't load at all until I opened it (i.e. it wasn't "embedded"), so there may be some opposition to this idea. Sorry, --DannyS712 (talk) 08:28, 12 March 2019 (UTC)
I applied similar maps to Austrian and Russian districts and they work well enough - maybe it's a matter of caching?--eh bien mon prince (talk) 00:37, 13 March 2019 (UTC)
@Underlying lk: now it works... that's odd. Either way, please see if there is consensus for this bot run --DannyS712 (talk) 00:40, 13 March 2019 (UTC)
I started a discussion on WikiProject Germany, let's see if it gets any feedback.--eh bien mon prince (talk) 00:44, 13 March 2019 (UTC)
@DannyS712: I can't get any replies from WP Germany, any suggestions on where I could seek consensus for the change?--eh bien mon prince (talk) 13:04, 15 March 2019 (UTC)
@Underlying lk: Not really, I'll try to work on this soon. Basically, if the template already has a map parameter, skip it, if it has a blank map parameter, add the template (use [7], [8], wikibase_item), and if it has no parameter, add both the parameter and the template, which is the tricky case. I'll let you know once I've filed a BRFA --DannyS712 (talk) 23:30, 15 March 2019 (UTC)
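The three cases just described could be sketched roughly like this (a regex-based sketch with a simplified insertion point; a production bot would use a real wikitext parser):

```python
import re

# |map= value up to the next pipe, newline, or closing brace
MAP_PARAM_RE = re.compile(r"\|\s*map\s*=[ \t]*([^|\n}]*)")

def add_osm_map(infobox_text, state_qid):
    template = ("{{Germany district OSM map|parent_subdivision=%s}}"
                % state_qid)
    m = MAP_PARAM_RE.search(infobox_text)
    if m:
        if m.group(1).strip():
            return infobox_text  # case 1: a map is already set, skip
        # case 2: blank |map= parameter, fill it in
        return (infobox_text[:m.end(1)] + template
                + infobox_text[m.end(1):])
    # case 3: no |map= at all, add both parameter and template just
    # before the closing braces (the tricky case)
    return infobox_text.rstrip("}\n") + "\n| map = " + template + "\n}}"
```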
BRFA filed --DannyS712 (talk) 08:36, 16 March 2019 (UTC)
RFC might be overkill; this isn't controversial, similar maps are in use, and it looks like a definite improvement. You could start a BRFA and then link to it from WikiProject Germany and give the BRFA some breathing time, not fast-track it, so editors have a chance to see it and comment, like 30-45 days. Likewise the trial edit period will alert watchlists to what is happening. -- GreenC 16:37, 15 March 2019 (UTC)

PBB Summary (and maybe PBB Further reading)

Hey, based on this TfD the template {{PBB Summary}} should be removed, and based on this extended discussion the removal should keep the |summary_text= text in the article. I'm not sure if |section_title= is used. Another editor is currently manually doing {{PBB Further reading}}, though a bot operation would be much faster. If that is included also, then just remove the outer template code, leaving the actual citation templates in the article. Could anyone help with a bot? Thanks. — Preceding unsigned comment added by Gonnym (talkcontribs) 20:08, 20 March 2019 (UTC)
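The unwrapping described here could be sketched roughly as follows, assuming |summary_text= is the last parameter and contains no nested templates; a real bot would use a proper wikitext parser such as mwparserfromhell instead of a regex.

```python
import re

# Unwrap {{PBB Summary|section_title=...|summary_text=TEXT}},
# keeping only TEXT. Assumes summary_text is the final parameter
# and contains no nested templates.
PBB_RE = re.compile(
    r"\{\{PBB Summary\s*(?:\|\s*section_title\s*=[^|}]*)?"
    r"\|\s*summary_text\s*=\s*(.*?)\s*\}\}",
    re.DOTALL)

def unwrap_pbb_summary(wikitext):
    return PBB_RE.sub(r"\1", wikitext)
```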

@Gonnym: see Wikipedia:Bots/Requests for approval/DannyS712 bot 23 --DannyS712 (talk) 20:09, 20 March 2019 (UTC)
You are a fast one Danny, thanks for already looking into this! --Gonnym (talk) 20:11, 20 March 2019 (UTC)
@Gonnym: I was prompted by User talk:DannyS712#Your BRFA (22), or I would never have noticed --DannyS712 (talk) 20:12, 20 March 2019 (UTC)

WP:APPLE Clerker

Hi, I might be able to have a bash myself, but could someone help create code for a Python bot that would get a list of users not on Wikipedia:WikiProject Apple Inc./Subscribe who have edited the WP's articles at least 10 times in the last 90 days or have added our userbox to their userpage, and add them to a mass message list at Wikipedia:WikiProject Apple Inc./To Welcome (this should be cleared at each run). It should also remove users who haven't edited in the last 5 years from the first mailing list. The bot should then create the mass message request code to send out the newsletter and a welcome message to the lists for me to submit. I'd like to be able to run the bot myself. (Pinging User: Smuckola) Thanks, RhinosF1(chat)(status)(contribs) 20:32, 20 March 2019 (UTC)

@RhinosF1: - When you mean run it by yourself, do you mean that you want to be the bot operator or that you want to be able to trigger a run by editing a page? If you want to be the bot op: What is your experience level with Python? Would you be comfortable enough to fix any bugs that popped up with the bot in the future? Kadane (talk) 21:18, 20 March 2019 (UTC)
Kadane, I'm pretty comfortable with Python and should be able to fix most bugs; I've just not done a large amount with the API system and would like the code. I'll then use that to learn the syntax and tweak things, as I've always learnt by changing other code and teaching myself. I'd like to be able to run the bot by just pressing run on the Python script when I need it doing. I intend to run it through User:RF1_Bot RhinosF1(chat)(status)(contribs) 21:25, 20 March 2019 (UTC)
RhinosF1 - Okay. What happens after a user is added to WP:WikiProject Apple Inc./To Welcome, receives a message, gets removed from the list, stops editing for 90 days, and then meets the criteria again? Does that user get re-added to the welcome list or ignored? Also what happens if a user unsubscribes, has never been sent a welcome message, and meets the criteria for the welcome message? Kadane (talk) 22:05, 20 March 2019 (UTC)
Kadane, Good point, Would it be possible to have it save a list somewhere of users it has processed before? Not bothered whether it's on-wiki or locally. RhinosF1(chat)(status)(contribs) 22:21, 20 March 2019 (UTC)
@RhinosF1: Its possible, but the behavior would need to be well defined before development starts. Kadane (talk) 22:46, 20 March 2019 (UTC)
Kadane, I'd preferably like any user it has added to (or removed from) a list before to be added to a check page that it can consult. If possible, it should scroll through the revision history of the mass message lists looking for anyone who has been on there before. RhinosF1(chat)(status)(contribs) 22:52, 20 March 2019 (UTC)
I think what you're looking for is basically already done: Wikipedia:WikiProject Directory/Description/WikiProject Apple Inc., if that seems reasonable to you. --Izno (talk) 00:01, 21 March 2019 (UTC)
That aside, your request looks like it will need community consensus. I was not happy to receive what was basically spam to my talk page recently based on a tenuous connection to the project. --Izno (talk) 00:03, 21 March 2019 (UTC)
Agree with Izno. Once was enough. Any additional spamming should be done manually. – Jonesey95 (talk) 03:26, 21 March 2019 (UTC)
@Kadane:, would it be possible to still remove users that haven't edited in the last 5 years from those lists, and maybe use the link Izno gave me to pick 30 accounts to welcome? Still include the check page, but the actual welcoming is done manually. Izno, we've removed you from the list, but I was planning on including, after any more than 3 newsletters, a confirmation that you wish to still receive messages. @Jonesey95: Any suggestions? RhinosF1(chat)(status)(contribs) 21:39, 21 March 2019 (UTC)
@RhinosF1: - Yes all of it is possible. Since there has been concern raised here, you should probably seek consensus for this task before this bot request proceeds. Have you thought about listing your request at the appropriate noticeboards to see if the community is okay with this task running? Kadane (talk) 23:39, 21 March 2019 (UTC)
Kadane, I'm happy to got to a noticeboard. Would WP:VP be okay? RhinosF1(chat)(status)(contribs) 06:29, 22 March 2019 (UTC)

RhinosF1 WP:VP is a good place to start. I am not the one that will judge consensus. That will be up to someone in the bot approvals group, but that is where most go to gain consensus for a bot task. Kadane (talk) 21:33, 22 March 2019 (UTC)

Kadane, I'll start that on Sunday. I'm going to take a mini WikiBreak tommorow. RhinosF1(chat)(status)(contribs) 21:35, 22 March 2019 (UTC)
Posted at VPP RhinosF1(chat)(status)(contribs) 07:50, 24 March 2019 (UTC)
Declined Not a good task for a bot. apparently RhinosF1(chat)(status)(contribs) 19:50, 24 March 2019 (UTC)

Bot that can belong to a user

I want a bot that can belong to a user, specifically me because I am making this request. Since I’m making this request, the name of my bot would be “MetricSupporter89Bot”. — Preceding unsigned comment added by MetricSupporter89 (talkcontribs) 23:17, 21 March 2019 (UTC)

Some more stuff I wanted to add: it would contribute stub articles I made that would take too long for me to contribute, edit my user page to be like other users' pages, etc. Metric Supporter 89 (talk) 23:26, 21 March 2019 (UTC)

This page is used to request that a bot do a task. If you are making your own bot and wish to obtain approval to run it, you should look at WP:BRFA. You should also read the Bot Policy before making a request. Kadane (talk) 23:37, 21 March 2019 (UTC)

JeriBot

This bot would edit articles that need citations and that need checking over for mistakes in the information, such as "Earth is the only planet that has life" where it should be "Earth is the only known planet to have life"--  Jeriqui123 ~~ Talk 12:04, 25 March 2019 (UTC)

@Jeriqui123: please stop making vague bot requests that have no chance of ever being approved. Headbomb {t · c · p · b} 12:06, 25 March 2019 (UTC)

Bot capable of marking pages for speedy deletion

I would like to see a neural network capable of marking pages for speedy deletion.

Here is a list of criteria I believe the bot could handle:

  • G: G1, G2, G3, G8, G10, G11, G13, G14
  • A: A1, A3, A11
  • R: R2, R4
  • F: F2, F4, F5, F7, F10
  • C: C1
  • U: U5
  • T: T2
  • P: P1, P2
  • X: None

Thanks InvalidOStalk 18:09, 27 March 2019 (UTC)

@InvalidOS: Declined Not a good task for a bot. It is very rare for bots to automatically delete pages. The few approved bots that delete pages are User:RonBot, User:AnomieBOT III, and User:Cydebot, and the situations in which they perform deletions are very specific. Having a bot tag pages for speedy deletion is equally problematic. --DannyS712 (talk) 18:41, 27 March 2019 (UTC)
One thing with neural nets is that they fail a lot before getting it right. While you could certainly build a neural net to help identify potential CSD targets, tagging for deletion is still something that needs human review. Headbomb {t · c · p · b} 18:51, 27 March 2019 (UTC)
We have 15+ years' worth of training data, assuming machine learning is even a viable approach. -- GreenC 21:41, 27 March 2019 (UTC)
@GreenC: but you would need an admin account to access that data, right? --DannyS712 (talk) 21:44, 27 March 2019 (UTC)
That is one way. Another is download every month's prior XML dump from archive.org and parse which articles contain speedy delete templates and save those into a database by speedy type. It wouldn't be complete but a large number. Going forward monitor the speedy template backlink additions and save those articles for a year or two. -- GreenC 00:08, 28 March 2019 (UTC)
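Harvesting labels from old dumps, as suggested above, could start with something as simple as the following sketch; in practice the {{db-...}} templates have many redirects and aliases, so a real scan would need the full template-redirect list.

```python
import re

# Match {{db-g11}}, {{Db-a7|...}}, {{db-multiple|...}}, etc.
SPEEDY_RE = re.compile(r"\{\{[Dd]b-([a-z0-9]+)")

def speedy_types(wikitext):
    """Collect speedy-deletion criteria present in one revision's
    wikitext, for building a labelled training set."""
    return sorted(set(SPEEDY_RE.findall(wikitext)))
```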
See mw:ORES#Curation support: using ORES, the new pages feed does note potential G3, G10, and G11 candidates. Galobtter (pingó mió) 19:01, 27 March 2019 (UTC)

P1 and P2 are rare. While trying to upgrade P2, some people are saying admins can't judge P2 as it is, so how could a bot? Legacypac (talk) 21:50, 27 March 2019 (UTC)

I think this should be a  Not done per above. OP, if you are interested in a network leaving the decision to human hands, see Galobtter's comment, and consider getting involved in that project. --Izno (talk) 23:33, 27 March 2019 (UTC)

There are several links to World Matchplay (darts)#Previous_incarnation, including piped links. But this section no longer exists, since it has been expanded into its own page, MFI World Matchplay. Can we get a bot to update the links to the new page? It exists on (probably) hundreds of dart player pages and more. DLManiac (talk) 16:32, 29 March 2019 (UTC)

@DLManiac:  Doing... with AWB --DannyS712 (talk) 16:41, 29 March 2019 (UTC)
@DLManiac: I only found 8 links, so I changed those ([9]). Why did you think there were so many? --DannyS712 (talk) 16:44, 29 March 2019 (UTC)
Thank you! It looks like you may have missed the ones that have an underscore? For example at Keith Deller. I was just trying to estimate based on the fact that it would exist in the performance timelines for most players of that time period. DLManiac (talk) 16:53, 29 March 2019 (UTC)
@DLManiac: I just tried it with the regex World( |_)Matchplay( |_)(darts)#Previous( |_)incarnation and didn't find any more. --DannyS712 (talk) 16:57, 29 March 2019 (UTC)
@DannyS712: well it definitely exists at Keith Deller#Performance timeline. Are you saying that's the only one that came up? DLManiac (talk) 17:03, 29 March 2019 (UTC)
@DLManiac:  Doing... - I forgot to escape the () --DannyS712 (talk) 17:18, 29 March 2019 (UTC)
@DLManiac:  Done including Keith Deller. Sorry, --DannyS712 (talk) 17:19, 29 March 2019 (UTC)
Thanks! Sorry if it wasn't enough. I was expecting more. Cheers! DLManiac (talk) 17:26, 29 March 2019 (UTC)
@DLManiac: No problem. Let me know if I missed any others --DannyS712 (talk) 17:33, 29 March 2019 (UTC)
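For anyone following along, the escaping issue in this thread is easy to reproduce: in a regex, an unescaped "(darts)" is a capture group matching the bare word darts, not the literal parenthesized text.

```python
import re

# Unescaped: "(darts)" is a capture group, so only "...Matchplay darts#..."
# would ever match, never the literal "(darts)".
buggy = re.compile(r"World( |_)Matchplay( |_)(darts)#Previous( |_)incarnation")
# Escaped: \(darts\) matches the literal parentheses; ( |_) still
# allows either spaces or underscores between words.
fixed = re.compile(r"World( |_)Matchplay( |_)\(darts\)#Previous( |_)incarnation")
```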

In order to reduce the load on the Signpost staff, it would really be nice if we could have a bot that would synchronize drafts with the newsroom.

If something exists at Wikipedia:Wikipedia Signpost/Next issue/Foobar, make a correspondence between the parameters of {{Signpost draft}} present on the draft page and those present at Wikipedia:Wikipedia Signpost/Newsroom#Article status.

Specifically

Draft parameter → Newsroom parameter
|title=foobar → |Has-title=yes
|blurb=foobar → |Has-blurb=yes
|Ready-for-Copyedit=foobar → |Ready-for-Copyedit=foobar
|Copyedit-done=foobar → |Copyedit-done=foobar
|Final-approval=foobar → |Final-approval=foobar

The second thing the bot should do is, if an irregular column is found to exist at Wikipedia:Wikipedia Signpost/Next issue/Foobar, then copy the corresponding item from ...Newsroom#Irregular columns and paste it at the bottom of ...Newsroom#Article status. And then keep it synchronized like the other things in ...Newsroom#Article status.

The bot could review the relevant pages every 15 minutes or so (or whatever time interval people think is best). Headbomb {t · c · p · b} 18:09, 2 April 2019 (UTC)
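The parameter mapping in the table above could be expressed as follows. This is a sketch: real {{Signpost draft}} calls can contain nested templates and pipes in values, so a production bot would use a proper wikitext parser rather than splitting on "|".

```python
import re

def draft_params(wikitext):
    """Crudely extract |key=value pairs from a {{Signpost draft}} call."""
    m = re.search(r"\{\{Signpost draft(.*?)\}\}", wikitext, re.DOTALL)
    if not m:
        return {}
    params = {}
    for part in m.group(1).split("|")[1:]:
        if "=" in part:
            key, _, value = part.partition("=")
            params[key.strip()] = value.strip()
    return params

def newsroom_updates(params):
    """Map draft parameters to newsroom parameters per the table above."""
    out = {}
    if params.get("title"):
        out["Has-title"] = "yes"
    if params.get("blurb"):
        out["Has-blurb"] = "yes"
    for key in ("Ready-for-Copyedit", "Copyedit-done", "Final-approval"):
        if key in params:
            out[key] = params[key]
    return out
```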

@Headbomb: So the only page that would be edited is the newsroom page? In that case, you might not need a bot - I *should* be able to make a user script to update it (at least the first part of the parameters; don't know about the second idea). --DannyS712 (talk) 18:26, 2 April 2019 (UTC)
@DannyS712: Well, a script would still need to be manually triggered. That would be an improvement over the current situation, but the idea here would be to automate things so that if people work on things over night, the newsroom has already updated in the morning (or whenever they return). Headbomb {t · c · p · b} 18:28, 2 April 2019 (UTC)
@Headbomb: then nevermind --DannyS712 (talk) 19:13, 2 April 2019 (UTC)
@Pppery and Trappist the monk: Both of you have played around with doing full-page parsing in Lua before I think. Thoughts? --Izno (talk) 21:18, 2 April 2019 (UTC)
All of this request except for the part about irregular sections is a straightforward usecase for {{Template parameter value}}. {{3x|p}}ery (talk) 21:21, 2 April 2019 (UTC)
Thanks for the ping. But, this isn't my cup of tea.
Trappist the monk (talk) 21:40, 2 April 2019 (UTC)
There shouldn't be any Lua magic here, plain regex should be able to do the trick for parsing. @Anomie: runs a similar bot for WP:BRFAs actually, I believe, maybe they'd be interested. Headbomb {t · c · p · b} 21:43, 2 April 2019 (UTC)
You do realize that the lua magic is to have the page update automatically without the need for a bot. {{3x|p}}ery (talk) 21:56, 2 April 2019 (UTC)
Ah, well, that's an option I didn't consider. It's certainly a very interesting option worth exploring. The irregular column option of the bot wouldn't be needed if that's the case, since someone could easily copy-paste the code from the irregular section to the main section. We'd have 'status' updates in the irregular section's collapsed box, but no one would care about that, since it's collapsed anyway. Headbomb {t · c · p · b} 22:04, 2 April 2019 (UTC)

Lua magic implemented; this is no longer needed. I'll be archiving this. Headbomb {t · c · p · b} 23:56, 2 April 2019 (UTC)

Vand3lsB0T

To revert vandalism and disruptive behaviour and help users. — Preceding unsigned comment added by Hurricane Bunter (talkcontribs) 12:24, 5 April 2019 (UTC)

Redundant User:ClueBot NG exists. What is it with these generic requests lately? Dat GuyTalkContribs 12:26, 5 April 2019 (UTC)
It is not sincere; the editor is blocked for disruptive behavior. -- GreenC 14:04, 5 April 2019 (UTC)

Renamed images

https://en.wikipedia.org/wiki/Wikipedia:Database_reports/Unused_file_redirects

Contains a small number of images that were renamed, but where article links were not updated.

Although not essential, updating image links helps avoid conflicts with Commons and the 'wrong image' being displayed in articles.

Would it be possible for a BOT to do this kind of repetitive check, update, refresh cycle, until there are no image links to redirects in the File: namespace from articles or other important pages? ShakespeareFan00 (talk) 11:49, 16 March 2019 (UTC)

It is not proposed that a BOT flag file redirects for deletion, as these are sometimes retained for the benefit of external projects/sites that may be linking to the OLD name. ShakespeareFan00 (talk) 11:49, 16 March 2019 (UTC)
Would it be considered cosmetic to change the name of the redirect if there isn't a conflict with Commons? Is your request only to make those changes or to change every page linked in the Unused_file_redirects? Kadane (talk) 14:44, 16 March 2019 (UTC)
It would be cosmetic, but here I think the benefits (future proofing) would outweigh the drawbacks. A VP discussion would be needed though. Headbomb {t · c · p · b} 19:43, 17 March 2019 (UTC)
@ShakespeareFan00: - Would you like to start a VP discussion on this request? Kadane (talk) 15:52, 20 March 2019 (UTC)
@Headbomb and Kadane:, FYI: ShakespeareFan00 is on an enforced WikiBreak. RhinosF1(chat)(status)(contribs) 22:24, 21 March 2019 (UTC)
They might be on a Wikibreak, but I don't really see the 'enforced' part, nor would it preclude a bot op from wanting to take on this task. Headbomb {t · c · p · b} 00:40, 22 March 2019 (UTC)
Headbomb, Kadane asked them to go to VP on Wednesday. RhinosF1(chat)(status)(contribs) 06:30, 22 March 2019 (UTC)
Headbomb, Kadane asked SF to go to VP. That can't happen as he is unable to log in until 1 April, as he has WikiBreak Enforcer applied until then. RhinosF1(chat)(status)(contribs) 06:32, 22 March 2019 (UTC)
Withdrawn- Per WP:COSMETIC, but with no prejudice to another contributor thinking otherwise. ShakespeareFan00 (talk) 23:14, 31 March 2019 (UTC)

Broken ref tag report bot

A bot that reports broken ref tags to a user, so he/she can fix it. — Preceding unsigned comment added by Darkwolfz (talkcontribs) 04:48, 6 February 2019 (UTC)

@Darkwolfz: Can you give a few examples? (I know they exist, but I haven't analyzed in depth why they appear broken). Thanks, --DannyS712 (talk) 04:55, 6 February 2019 (UTC)

Sure DannyS712. For example, if an article has a <ref> and an editor using the source editor accidentally inserts a backspace or line break inside the ref tag, that will break it; or editors give wrong parameters. For example, I found an article today where they entered the url correctly, but instead of giving the website name, they added the url again. So if there's a bot which can detect broken ref tags or hyperlinks and report them to me, I can fix them. Darkwolfz (talk) 05:02, 6 February 2019 (UTC)

@Darkwolfz: What I meant was can you link to a few articles so I know what to scan for? --DannyS712 (talk) 05:10, 6 February 2019 (UTC)
Or couldn't you just look through the pages in Category:Pages with incorrect ref formatting and Category:Pages with broken reference names? --DannyS712 (talk) 05:12, 6 February 2019 (UTC)

DannyS712 https://en.wikipedia.org/wiki/Formby_Hall In its recent history, I fixed an error like that, and maybe we should scan for sources that appear in red between <ref>...</ref>, or a missing opening <ref> or closing </ref>, and also ones with a missing reference title.

@Darkwolfz: Did you see the categories I linked to above? --DannyS712 (talk) 05:32, 6 February 2019 (UTC)
@DannyS712: yes, but almost all of them have title errors; if we could just find tag errors, as in a missing <ref> or </ref>, it'd be great.
@Darkwolfz: what about Category:CS1 errors: external links? --DannyS712 (talk) 05:48, 6 February 2019 (UTC)

Yes, it helps a bit, but is it possible to find articles which don't belong to the category, as in a new error made by someone accidentally? And to filter missing <ref> tags?

The article before you edited it (https://en.wikipedia.org/w/index.php?title=Formby_Hall&oldid=871268351) was in the CS1 category - can you give an example of an article with the error you're thinking of that isn't in one of the above-mentioned categories? --DannyS712 (talk) 05:58, 6 February 2019 (UTC)
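For the missing-tag case discussed above, a scan could start from a simple balance check on the wikitext. A minimal sketch in Python (the function name is illustrative, not from any existing bot; it ignores complications such as refs inside comments or <nowiki>):

```python
import re

def ref_tag_balance(wikitext):
    """Count opening and closing <ref> tags; a mismatch suggests a broken ref.

    Self-closing <ref name="x" /> uses are neutral, so strip them first.
    Returns (opens, closes) -- equal counts means probably balanced.
    """
    text = re.sub(r'<ref\b[^>/]*/\s*>', '', wikitext)  # drop self-closing refs
    opens = len(re.findall(r'<ref\b', text))
    closes = len(re.findall(r'</ref\s*>', text))
    return opens, closes
```

Pages where the two counts differ could then be listed for manual repair.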
This may not be what the OP intended, but it would be very useful to have a category or report for broken Harvard-style references. For example, in The White Negro, the short reference "Manso 1985" does not link to a full citation. An individual editor can use User:Ucucha/HarvErrors.js to make these references appear in red, but I do not know of a report or category that systematically lists articles where such errors are present. A set of reports, including individual reports for FAs and GAs, would be useful for ensuring that articles have verifiable references. – Jonesey95 (talk) 15:46, 6 February 2019 (UTC)
@Jonesey95: I'll try to adapt the script you linked to --DannyS712 (talk) 06:08, 11 February 2019 (UTC)
It should be pretty easy, if there is a category (or list) of all pages using Harvard-style references, or an easy way to make one. Otherwise I would have to scan through all pages to find the ones with Harvard-style reference errors. --DannyS712 (talk) 06:14, 11 February 2019 (UTC)
You could start with something like this. – Jonesey95 (talk) 08:45, 11 February 2019 (UTC)
The {{sfn}} template isn't Harvard-style references, it's Shortened footnotes. Harvard-style references are parenthetical, as used on pages like Actuary. However, the two methods have a number of common features, primarily the separation of page number information from the long-form citation, with the association between the two being by means of a link formed from up to four surnames and a year. From my reading of the above, it is these links that need to be tested; and we have a script to do that, see User:Ucucha/HarvErrors. --Redrose64 🌹 (talk) 20:38, 11 February 2019 (UTC)
@Jonesey95: adapting it is a lot more complicated than I thought - I don't think I'll be able to do this. But, User:DannyS712 test/HarvErrors.js will give you an alert on every page that you visit that has these errors - don't know if you'll find that useful. --DannyS712 test (talk) 21:15, 11 February 2019 (UTC)
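For the record, the core of such a short-reference check is comparing the link targets generated by {{sfn}} against the anchors the full citations provide. A rough sketch, assuming Python and handling only the simplest case of {{sfn|Surname|Year}} against a single |last=/|year= pair (real links can use up to four surnames, as Redrose64 notes above, and real anchors are CITEREF ids):

```python
import re

def sfn_targets(wikitext):
    """Collect (surname..., year) link targets from {{sfn|...}} uses."""
    targets = set()
    for m in re.finditer(r'\{\{sfn\|([^{}]*)\}\}', wikitext, re.I):
        parts = [p.strip() for p in m.group(1).split('|') if '=' not in p]
        targets.add(tuple(parts))
    return targets

def citation_anchors(wikitext):
    """Collect (last, year) anchors offered by {{cite ...}} templates
    (simplified: single author via |last= and |year= only)."""
    anchors = set()
    for m in re.finditer(r'\{\{cite [^{}]*\}\}', wikitext, re.I):
        last = re.search(r'\|\s*last\s*=\s*([^|}]+)', m.group(0))
        year = re.search(r'\|\s*year\s*=\s*([^|}]+)', m.group(0))
        if last and year:
            anchors.add((last.group(1).strip(), year.group(1).strip()))
    return anchors

def broken_sfns(wikitext):
    """Short references with no matching full citation."""
    return sfn_targets(wikitext) - citation_anchors(wikitext)
```

Run over a page like The White Negro, the "Manso 1985" case above would surface as an orphaned target.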

List of values used for Template:Tooltip

Could someone generate a list of values used for Template:Tooltip (the redirect, not Template:Abbr) in a table form, so it would be easier to see what needs to be converted to {{abbr}} per the result of this discussion? --Gonnym (talk) 14:20, 15 February 2019 (UTC)

Doing... Dat GuyTalkContribs 15:54, 15 February 2019 (UTC)
Gonnym I can't find what Kuznetsov-class aircraft carrier has that transcludes the template. Could you help me figure it out? Dat GuyTalkContribs 16:33, 15 February 2019 (UTC)
In addition, would you like me to also look if the articles that transclude the tooltip template also match the [Abbr/Abrrv/What is] templates? It seems like they're redirects. Also pinging @Amorymeltzer: fyi. Dat GuyTalkContribs 16:47, 15 February 2019 (UTC)
Regarding your second question, no need. Only {{Tooltip}} was discussed as deprecated in that discussion. Regarding the first issue though. Wow. I've looked over that article multiple times and inside the templates used on that page and I can't seem to figure out where tooltip is used. Nothing seems to be using it. --Gonnym (talk) 17:06, 15 February 2019 (UTC)
It was Template:Ukrainian ships, I've removed the use. ~ Amory (utc) 17:13, 15 February 2019 (UTC)
@Amorymeltzer and Gonnym: Before I finish it, does User:DatGuy/sandbox look good? Dat GuyTalkContribs 18:01, 15 February 2019 (UTC)
I can't speak for Gonnym, but one thing I think would be helpful (at least, how I was planning on thinking about it) is to know which of these are within the same template or table. I imagine that'd be harder to handle, but ideally many of the uses in mainspace could be replaced by a wrapper template for Module:Sports table, so knowing what the common pairings would be helpful. ~ Amory (utc) 18:07, 15 February 2019 (UTC)
That looks good. Do you think it is possible to list only unique pairings and the number of times it appears? So for example, list only once the "Ref."/"Reference". If this is possible, it will help in deciding if this is something that can be done with AWB or a bot. It will also make reading the table easier. If it can't, the current table still helps a lot though. --Gonnym (talk) 21:31, 15 February 2019 (UTC)
I don't believe that checking if it's inside a template/table is simple/worth the time, but I've made User:DatGuy/sandbox and User talk:DatGuy/sandbox. They will be updated with sandbox1 accordingly when article size exceeds limits. Dat GuyTalkContribs 23:45, 15 February 2019 (UTC)
Since I see (at least) two different "GD" entries in User talk:DatGuy/sandbox, I'm assuming you managed to get unique pairings right? If that is true, could you also add the 2nd argument column to this table? This is very helpful btw. I've already identified a few thousand easy replacements. --Gonnym (talk) 23:56, 15 February 2019 (UTC)
I'm not sure what the duplicate entries are actually. I've attempted a fix. Dat GuyTalkContribs 23:59, 15 February 2019 (UTC)
@Gonnym and Amorymeltzer: Well, seems like it's being a bit of a pain in the ass due to article size limits. It has calculated 24000 uses. The pages are User talk:DatGuy/sandbox and User:DatGuy/sandbox(0-22). Dat GuyTalkContribs 09:44, 16 February 2019 (UTC)
Yeah, it still has a lot of uses, but for example, just "Pts" alone has 10705 uses. Just reconfirming with you, are all "Pts" uses using the same second argument value? --Gonnym (talk) 09:47, 16 February 2019 (UTC)
No, they aren't. You could go through a few pages of the User: pages and look for it. Apologies. Dat GuyTalkContribs 09:48, 16 February 2019 (UTC)
Haha indeed! I was surprised you were so confident. Still, it's helpful to have. I've got a lot on my plate at the moment, but if I get a chance in the next month or so, I'll try and work on finding the uses that are the same (e.g. all the headers with W/D/L, those with W/D/L/Pts, etc.). ~ Amory (utc) 11:34, 17 February 2019 (UTC)
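For anyone repeating this kind of survey, counting unique first/second-argument pairings per page is a small regex job. A sketch, assuming Python (it only handles two-argument, non-nested uses of the template):

```python
import re
from collections import Counter

def tooltip_pairs(wikitext):
    """Count unique ({{Tooltip|arg1|arg2}}) pairings in one page's wikitext."""
    pairs = Counter()
    for m in re.finditer(r'\{\{\s*tooltip\s*\|([^{}|]*)\|([^{}|]*)\}\}',
                         wikitext, re.I):
        pairs[(m.group(1).strip(), m.group(2).strip())] += 1
    return pairs
```

Summing the counters over all transcluding pages would give the "Pts"-style frequency table directly.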

Hi. MOS:ACCESS#Text / MOS:FONTSIZE are clear. We are to "avoid using smaller font sizes in elements that already use a smaller font size, such as infoboxes, navboxes and reference sections." However, many infoboxes use {{small}} or the html code, especially around degrees earned (here's one example I corrected yesterday). I used AWB to remove small font from many U.S. politician infoboxes of presidents, senators, and governors, but there are so many more articles that have them. Here's an example for a TV station. I've noticed many movies and TV shows have small text in the infobox as well. Since I cannot calculate how many articles violate this particular rule of MOS, I would like someone to automate a bot to remove small text from infoboxes of all kinds. – Muboshgu (talk) 22:04, 20 December 2018 (UTC)

At least on my screen, your edit had no effect, because as far as I know, there is some sort of CSS style that limits infobox font size to a minimum of 85%. I am pretty sure I just saw that described the other day, but my searches for it have turned up nothing. Maybe someone like TheDJ would know.
If I am correct, that means that edits to remove small templates and tags from infoboxes would be cosmetic edits, which are generally frowned upon. However, there are a heck of a lot of unclosed <small>...</small> tags within infoboxes, along with small tags wrapping multiple lines, both of which cause Linter errors, so it may be possible to get a bot approved to remove tags as long as fixing Linter errors is in the bot's scope. I welcome corrections on the four things I got wrong in these four sentences. – Jonesey95 (talk) 23:58, 20 December 2018 (UTC)
It's not "cosmetic". It's an accessibility issue. In this version, the BS, MS, and JD in the infobox are smaller than 85%. – Muboshgu (talk) 05:47, 21 December 2018 (UTC)
FWIW, Firefox's Inspector tells me that "BS" in that version is exactly 85%. – Jonesey95 (talk) 10:29, 21 December 2018 (UTC)
Odd. That was not the assessment of User:Dreamy Jazz. [10] – Muboshgu (talk) 20:42, 22 December 2018 (UTC)
Fascinating. I just looked at the two revisions of Brian Bosma in Chrome while not logged in, and I definitely see a size difference in the "BS" and "JD" characters. So these would not be cosmetic edits after all, at least for some viewers using some browsers. (I have struck some of my previous comments.) – Jonesey95 (talk) 21:59, 22 December 2018 (UTC)
P.S. I found the reference to the small template sizing text at 85% at Template:Small. It looks like I may have misinterpreted that note. – Jonesey95 (talk) 01:42, 23 December 2018 (UTC)

@Jonesey95 and Muboshgu: Hello. Although the 85% font-size is defined, the computed value of the font-size is below 11.9px (it is 10.4667px). This is because font-size percentages work based on the parent container, not the document (see 1 under percentages). In this case the infobox has already decreased the font-size to 88% of the document, so the font-size computed from the {{small}} tag will be 74.8% of the rest of the document (0.88 * 0.85 = 0.748). This is the case in Firefox, Chrome, Edge (10.4px), Opera and Internet Explorer. This behaviour is the standard and so will be experienced in all browsers. Dreamy Jazz 🎷 talk to me | my contributions 10:46, 23 December 2018 (UTC)

Yes, here's a demo of what happens when percentages get enclosed by other percentages: Text Text Text Text Text . That goes to five levels, each being 95% of the enclosing element. --Redrose64 🌹 (talk) 12:42, 23 December 2018 (UTC)
That is helpful. I discovered that I have set my Firefox preferences to prevent the font size from going below 11 pt, which enforces MOS for me. But in Chrome, which I have left unconfigured, that text gets smaller. By all means, let's remove instances of <small>...</small> and {{small}} (and its size-reducing siblings) from infoboxes, both in Template space and in article space. – Jonesey95 (talk) 14:31, 23 December 2018 (UTC)
Yes, let's. Thanks for that clarification Jonesey95. – Muboshgu (talk) 15:46, 23 December 2018 (UTC)
I have been using AWB to help with this issue too. <small> and </small> can be removed with a simple find and replace but the template is better dealt with using Regex. --Emir of Wikipedia (talk) 21:08, 3 February 2019 (UTC)
Is there a category and/or method of easily listing these questionable pages? Primefac (talk) 15:44, 10 February 2019 (UTC)
I think that Special:WhatLinksHere/Template:Small hiding links and redirects but showing transclusions might find what you want but not in a convenient list or category. When I was doing it in AWB I was just loading from the birth year categories. Emir of Wikipedia (talk) 15:58, 17 February 2019 (UTC)
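A sketch of the find-and-replace involved, assuming Python regex rather than AWB syntax. It unwraps non-nested tags and templates, keeping the contents; a real run would restrict this to infobox parameter values, since small text elsewhere is allowed:

```python
import re

def strip_small(text):
    """Unwrap <small>...</small> and {{small|...}} (no nesting), keeping contents."""
    text = re.sub(r'<small>(.*?)</small>', r'\1', text, flags=re.S)
    text = re.sub(r'\{\{\s*small\s*\|([^{}]*)\}\}', r'\1', text, flags=re.I)
    return text
```

The same pattern extends to the size-reducing sibling templates mentioned above.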

The Wikipedia:Good articles/mismatches page details some conflicts with good articles and usually indicates a mistake of some sort that needs to be sorted out. Category:Good articles means that an article has the green spot that indicates it is classified as good, while Category:Wikipedia good articles are articles which have undergone a review. So the "In Category:Good articles but not Category:Wikipedia good articles" heading indicates that a good article symbol may be present on an article that has not actually undergone a review. Wikipedia:Good articles/all is a list of all good articles and is manually updated. The last two headings usually indicate articles that have not been added after passing a review or removed after being delisted.

This page was originally created by JJMC89 a year ago using AWB after I requested it. At the time it contained thousands of mismatches[11]. We have just resolved all those, mainly through the efforts of DepressedPer. I was hoping there could be a bot that would update the page periodically so we can keep on top of any further mismatches. I have tried running it myself through AWB, but the number of articles is too large to do in one hit. There was also an issue that articles that had been moved would show up as a mismatch if the name was different at the Wikipedia:Good articles/all page. Maybe there is a better workaround for this, the last time I just renamed the articles at the GA list but that was quite time consuming. Regards AIRcorn (talk) 04:47, 13 April 2019 (UTC)

@Aircorn: I can run it with AWB, my computer seems to be able to handle it. Do you know what AWB conditions they used? --DannyS712 (talk) 05:02, 13 April 2019 (UTC)
Thanks for the offer. I believe it is more a numbers thing than a power one (although my computer is certainly lacking the last one). From my understanding you need to be an administrator to run AWB above a certain number of entries. If I run it it misses a whole lot. As far as I can tell it was made by going to tools and using list comparer. You add each category as a source (or links in the case of Wikipedia:Good articles/all) and compare. It should show you which ones were unique in each list and which were common. The unique ones can then be saved and copied under the correct header in Wikipedia:Good articles/mismatches. It is not too laborious, but if I run it it only pulls 25000 from Category:Good articles when there are nearly 30000 entries. Also I don't know how to account for redirects. If you can figure out a better way to make it work it would be interesting to see how many new mismatches have occurred in the last year, but I was ultimately hoping for a more automated update process. AIRcorn (talk) 06:01, 13 April 2019 (UTC)
I would be happy to set up an automated cron-based bot on Toolforge to do this. There would be no manual processes involved; it would run at a set time and be totally hands-off. It will pull the data via the API. This is something my tool wikiget does. I even have an example of how to do list compare in the documentation. -- GreenC 06:22, 13 April 2019 (UTC)
@GreenC: That sounds awesome. It probably doesn't need to run too often. I am not terribly tech literate, but let me know if I can help in some other way. AIRcorn (talk) 01:47, 15 April 2019 (UTC)
Ok following up at Wikipedia talk:Good articles/mismatches. -- GreenC 15:32, 15 April 2019 (UTC)
Petscan is your friend. In GA but not WGA; In WGA but not GA; in GA but not linked from Wikipedia:GA/All; linked from Wikipedia:GA/All but not in GA. You can get different output on the output tab. --Izno (talk) 13:34, 13 April 2019 (UTC)
Thanks. However, the first link has pulled all the articles in the category and the second none. Might be to do with one category being on the talk page and the other on the article page. AIRcorn (talk) 01:47, 15 April 2019 (UTC)
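For reference, the list-compare itself is straightforward once the category members are fetched via the API, as GreenC describes. A sketch, assuming Python; note that Category:Wikipedia good articles sits on talk pages, so titles would need the "Talk:" prefix stripped before comparing:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = 'https://en.wikipedia.org/w/api.php'

def category_members(category):
    """All page titles in a category, following API continuation."""
    titles, cont = [], {}
    while True:
        params = {'action': 'query', 'list': 'categorymembers',
                  'cmtitle': category, 'cmlimit': 'max', 'format': 'json'}
        params.update(cont)
        data = json.load(urlopen(API + '?' + urlencode(params)))
        titles += [m['title'] for m in data['query']['categorymembers']]
        cont = data.get('continue')
        if not cont:
            return titles

def mismatches(ga, wga):
    """The two 'In X but not Y' report lists, as sorted set differences."""
    ga, wga = set(ga), set(wga)
    return sorted(ga - wga), sorted(wga - ga)
```

Because the comparison is set-based, it has no AWB-style cap on list size, which avoids the 25000-entry limit described above.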

Fix 'background' in sortable tables

See background (pardon the pun).

The idea is to change the css element background to background-color (and other similar attributes) in sortable tables (example). Headbomb {t · c · p · b} 19:14, 5 March 2019 (UTC)

@Magioladitis: The checkwiki team could also get in on this. Headbomb {t · c · p · b} 19:18, 5 March 2019 (UTC)
I would be sensitive here to whether there is another background style declared, as background is shorthand for a number of attributes. Otherwise seems like a good idea. --Izno (talk) 22:44, 5 March 2019 (UTC)
Oppose as written. Changing background to background-style would break all existing uses, because background-style is not a defined property. See CSS Backgrounds and Borders Module Level 3 for examples of valid property names. --Redrose64 🌹 (talk) 13:03, 7 March 2019 (UTC)
@Redrose64:, amended. I meant background-color, not background-style. Headbomb {t · c · p · b} 16:34, 7 March 2019 (UTC)
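A sketch of the substitution, assuming Python, which also addresses Izno's point above: it only rewrites background when the value is a bare colour, leaving genuine shorthand (image, position, repeat) untouched:

```python
import re

def fix_background(style):
    """Rewrite 'background:<value>' to 'background-color:<value>' only when
    the value is a bare colour keyword or hex value, not full shorthand."""
    def repl(m):
        value = m.group(1).strip()
        if re.fullmatch(r'(#[0-9a-fA-F]{3,8}|[a-zA-Z]+)', value):
            return 'background-color:' + value
        return m.group(0)  # real shorthand: leave alone
    return re.sub(r'background\s*:\s*([^;"]*)', repl, style)
```

A bot run would apply this only inside style attributes of sortable tables, per the request.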

Detect Hijacked journals

Stop Predatory Journals maintains a list of hijacked journals. Could someone search wikipedia for the presence of hijacked URLs and produce a daily/weekly/whateverly report? Maybe have a WP:WCW task for it too? Headbomb {t · c · p · b} 00:09, 4 February 2019 (UTC)

This is a good idea. Made a script to scrape the site and search WP; it found three domains in 11 articles. -- GreenC 16:50, 4 February 2019 (UTC)
Extended content
  • Emma Yhnell <snippet>wins BSA Award Lecture | News | The British Neuroscience Association". www.bna.org.uk. Retrieved 2018-10-11. Video of Emma Yhnell speaking on public engagement</snippet>
  • Catherine Abbott <snippet>Neuroscience Day 2018 | Events | The British Neuroscience Association". www.bna.org.uk. Retrieved 2018-04-15. "Funding Panel membership | NC3Rs". www.nc3rs</snippet>
  • Irene Tracey <snippet>Winners 2018 Announced! | News | The British Neuroscience Association". www.bna.org.uk. Retrieved 2019-01-04. Tracey, Irene; Farrar, John T.; Okell, Thomas</snippet>
  • John H. Coote <snippet>"Professor John Coote | News | The British Neuroscience Association". www.bna.org.uk. British Neuroscience Association. Retrieved 4 December 2017. "John</snippet>

@Headbomb: can post the report on a regular basis if there is a page. Script takes less than 20 seconds to complete so not expensive on resources. -- GreenC 17:02, 4 February 2019 (UTC)

@GreenC:, just a note, bnas.org/ ≠ bnas.org.uk/. Likewise, acjournal.in ≠ acjournal.org. Headbomb {t · c · p · b} 01:03, 6 March 2019 (UTC)
They were found with CirrusSearch (Elasticsearch) it got some close matches. -- GreenC 17:27, 7 March 2019 (UTC)
@GreenC: [12] is a better link than the above one for hijacked journals. It's pretty much the same as the old link, but this one is updated. In particular, there's an additional journal (Arctic, at the very bottom of the page).
There's a few place a report like that could be generated. Category talk:Hijacked journals seems as good a place as any. I'd suggest creating a section and just overwriting it every day (if there's a change). Headbomb {t · c · p · b} 17:46, 7 March 2019 (UTC)
How about WP:Hijacked journals / WP:HIJACKJOURNAL (an essay or how-to) that can define the meaning, describe the problem for wikipedia, link to external sites, and link to the bot-generated list as a sub-page. Nothing complicated, but a central place for discussion and info that can be linked to from other pages. -- GreenC 18:08, 7 March 2019 (UTC)
If we have a dedicated page, Wikipedia:Reliable sources/Hijacked journals seems to be the natural place to me. Headbomb {t · c · p · b} 18:13, 7 March 2019 (UTC)
I don't want to mess with creating a sub-page in a Guideline document and the needed top-hat navigations etc. You can if you want let me know. -- GreenC 17:08, 8 March 2019 (UTC)
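Whatever page hosts the report, the matching itself should compare hosts exactly rather than by substring, to avoid the bnas.org / bna.org.uk near-misses noted above. A sketch, assuming Python, with the article URLs coming from an external-links query:

```python
from urllib.parse import urlparse

def domain_of(url):
    """Normalized host of a URL (lowercased, without a leading 'www.')."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith('www.') else host

def hijacked_links(article_urls, hijacked_domains):
    """External links whose host matches a hijacked domain exactly,
    so bnas.org does not match bna.org.uk."""
    bad = {d.lower() for d in hijacked_domains}
    return [u for u in article_urls if domain_of(u) in bad]
```

The daily job would then diff this list against the previous run and only rewrite the report section when it changes.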

Bot to generate list of editor's creations which have been tagged for improvements

This would be useful for New Page Patrol: it would save us sending multiple messages about an editor's creations (which can cause upset) and show clearly what the problem is and what articles have been identified as needing improvements. This has been requested more than once of me by an editor and I've had to find and list them manually. It would also benefit other editors - I would love to look over which of my creations have tags and improve them. This would give creators (if they want to) the chance to make improvements and bring down the backlogs. Is it feasible? Thanks for looking into this, Boleyn (talk) 08:43, 9 March 2019 (UTC)

@Boleyn: Maybe ask at User talk:Community Tech bot? Currently, Wikipedia:Database reports/Editors eligible for Autopatrol privilege already tracks if a user's pages have been tagged, so they might be able to help (though the code is on github). That specific report is overseen by User:MusikAnimal (WMF), so pinging @MusikAnimal if they want to chime in. --DannyS712 (talk) 08:48, 9 March 2019 (UTC)
Thanks for the suggestions, DannyS712. Boleyn (talk) 08:57, 9 March 2019 (UTC)
I think this is better fit for an external tool, rather than a bot. I have debated for some time adding this functionality to XTools. The problem is the relevant maintenance categories are different on every wiki. I suppose we can just make them configurable. I'll look into it!
In the meantime, quarry:query/34173 is an example query you could use to find such articles. Note that this does not encompass all maintenance categories, just the major ones. You can fork the query and tweak it as desired. Best, MusikAnimal talk 18:42, 9 March 2019 (UTC)
Thanks, MusikAnimal, that's really helpful. Adamtt9, you may want to check this out, and thanks for raising the idea. Boleyn (talk) 08:28, 10 March 2019 (UTC)

CFDS tagging and listing for "eSports" categories

There was a request to move categories with "eSports" to "esports" per WP:C2D at WT:VG, but that list is sizable. Is there someone here who can take care of the listing and tagging? (Avoid the WikiProject assessment categories.) --Izno (talk) 18:04, 31 March 2019 (UTC)

@Izno: sure, I've added it to my current BRFA (WP:Bots/Requests for approval/DannyS712 bot 13) as a request for a trial. --DannyS712 (talk) 19:50, 1 April 2019 (UTC)
 Done --DannyS712 (talk) 22:31, 11 April 2019 (UTC)

Auto-archive IP warnings

I imagine it's fairly confusing for IP users to have to scroll through lots of old warnings from previous users of their IP before getting to their actual message. We have Template:Old IP warnings top (and its partner), but it's rarely used—thoughts on writing a bot to automatically apply it to everything more than a yearish ago? Gaelan 💬✏️ 16:21, 10 January 2019 (UTC)

Technically feasible and is a good idea, IMO. Needs wider community input beyond BOTREQ. -- GreenC 17:09, 10 January 2019 (UTC)
Brought it to WP:VPR. Gaelan 💬✏️ 19:50, 11 January 2019 (UTC)
@Gaelan: FYI it was archived a while back to Wikipedia:Village pump (proposals)/Archive 156#Auto-archive old IP warnings --DannyS712 (talk) 05:39, 12 March 2019 (UTC)

It seems like there is community support to implement this from the discussions. Should we open another discussion to iron out the implementation details? If there is consensus to do this task with a bot, I am willing to do it. Kadane (talk) 05:45, 15 March 2019 (UTC)
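A sketch of the core age test such a bot would need, assuming Python. The timestamp regex matches standard "(UTC)" signatures, and the name of the closing template, {{Old IP warnings bottom}}, is assumed as the partner of {{Old IP warnings top}}:

```python
import re
from datetime import datetime, timedelta

SIG = re.compile(r'(\d{2}):(\d{2}), (\d{1,2}) (\w+) (\d{4}) \(UTC\)')
MONTHS = {m: i for i, m in enumerate(
    ['January', 'February', 'March', 'April', 'May', 'June', 'July',
     'August', 'September', 'October', 'November', 'December'], 1)}

def newest_signature(wikitext):
    """Latest '(UTC)' signature timestamp on the page, or None."""
    stamps = [datetime(int(y), MONTHS[mon], int(d), int(hh), int(mm))
              for hh, mm, d, mon, y in SIG.findall(wikitext)]
    return max(stamps, default=None)

def wrap_if_stale(wikitext, now, max_age_days=365):
    """Wrap the whole page in the old-warnings templates when every
    signature is older than the cutoff."""
    newest = newest_signature(wikitext)
    if newest and now - newest > timedelta(days=max_age_days):
        return ('{{Old IP warnings top}}\n' + wikitext +
                '\n{{Old IP warnings bottom}}')
    return wikitext
```

A finer-grained version would wrap only the stale sections rather than the whole page; that detail is one of the implementation questions a follow-up discussion could settle.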

Taxa

Bot to create entry in the (English) Wikipedia Category:Plants described in (year)

Data to be taken from Wikidata to give the year of publication of a taxon and create "Category:Taxa described in ()" within the (English) Wikipedia taxon entry, if a Wikipedia entry has been created. MargaretRDonald (talk) 22:55, 22 January 2019 (UTC)

@MargaretRDonald: why? is there any support for such mass-creation? --DannyS712 (talk) 06:04, 1 March 2019 (UTC)
@DannyS712: Currently we have "Category:Taxa named by x" when a user links to the category, he/she gets a ridiculously uninformative list, which fails to include many of the plants authored by x for which there are wikipedia articles. If there were some automatic creation of the category for a plant article, then the only reason that a plant would be missing from the list of taxa authored would be that there was no wikipedia article. As it stands, the category:Taxa named by x is ludicrously unhelpful. See for example, Category:Taxa named by Ferdinand von Mueller. (I put this up here in the hope that others might consider the issue and perhaps do something about it. MargaretRDonald (talk) 06:13, 1 March 2019 (UTC)
@MargaretRDonald: Is this part of the request below? --DannyS712 (talk) 06:16, 1 March 2019 (UTC)
Hi @DannyS712: They are related, but slightly different. It is always clear who named the taxon (the final author). It is somewhat less clear the year in which it was described: with some wikipedia editors choosing the year of the first publication, while others consider that the person(s) who gave the current name should get the year of publication too, in that they have perfected (refined) the description. Thus, in Decaisnina hollrungii (K.Schum.) Barlow, the year in which the plant is described has been given as that of the publication by K.Schum., but there is no doubt that the taxon was named by Barlow. (I am not sure what the wikipedia consensus is on this!!) MargaretRDonald (talk) 06:38, 1 March 2019 (UTC)

Bot to create category "Category:Taxa described by ()"

The bot would use the wikidata taxon entry to find the author of a taxon, and then use it again to find the corresponding author article to find the appropriate author category. (This will not always work, but will work in a large number of cases.) Thus, the English article for "Edward Rudge" corresponds to the category:"Category:Taxa named by Edward Rudge", and the simple strategy outlined here would work for Edward Rudge, Stephen Hopper and .... The category created would be an entry in the article. MargaretRDonald (talk) 23:08, 22 January 2019 (UTC)

@MargaretRDonald: why? is there any support for such mass-creation? Also, what do you mean by the category created would be an entry in the article, and do you want "described by" or "named by"? --DannyS712 (talk) 06:05, 1 March 2019 (UTC)
@DannyS712: 1. See my answer to your preceding question. 2. There are two categories related to authorship and publication: (i) Category:Plants described in (year), and (ii) Category:Taxa named by (author). You can see how they are used in (for example) Velleia paradoxa. For my money I am not sure that I would really want to know what plants were described in 1810, but I would certainly like, when clicking on Category:Taxa named by Robert Brown, to be getting a complete list of wikipedia articles for which this is true. (Hope this explains why I think it important) MargaretRDonald (talk) 06:26, 1 March 2019 (UTC)
@MargaretRDonald: So basically, add "Plants described in ___" and "Taxa named by ___" to all currently existing taxa pages if they are missing? --DannyS712 (talk) 06:35, 1 March 2019 (UTC)
Yes. That would be great. That is, "Category:Plants described in ___" and "Category:Taxa named by ___" to the end of the taxon page.. MargaretRDonald (talk) 06:39, 1 March 2019 (UTC)
@MargaretRDonald: is this at all related to the |authority parameter in {{Speciesbox}} and its ilk? That would make this a lot simpler... --DannyS712 (talk) 06:51, 1 March 2019 (UTC)
@DannyS712: For the author, yes. It is the parameter |authority in {{Speciesbox}}. The year is not. It is found associated with the basionym in the Wikidata entry (an entry which is often missing from wikidata, but if it exists that would be the safest place to take it from). Most articles show the author of the basionym (the name in the brackets), but have no taxonomy section and even when they do it is unstructured text... So probably the year of the description is in the too-hard basket. (But as I indicated, I find the year category somewhat less important..) MargaretRDonald (talk) 07:07, 1 March 2019 (UTC)

And if we were to do this the result would be that we would get, e.g., a list of accepted taxa named by John Lindley, and not a whole ragtag list of plants where the assigning of the initial genus is now considered incorrect. In achieving that we could be a far better resource than IPNI. MargaretRDonald (talk) 06:57, 1 March 2019 (UTC)

@MargaretRDonald: I don't think that wikipedia is going to be a better resource than IPNI for this field - ~maybe~ wikispecies? In any event, this task is beyond my abilities, but hopefully my questions have made it clearer to others what you are requesting. --DannyS712 (talk) 07:06, 1 March 2019 (UTC)
@DannyS712: Probably not a better resource, but an extremely useful resource should it list only accepted names. (IPNI lists everything for an author and it then can require checking every name in say tropicos or Plants of the world to find which of them are accepted, a considerable task.) MargaretRDonald (talk) 07:14, 1 March 2019 (UTC)
@MargaretRDonald: yeah, but a bot shouldn't tag those without the same or comparable sources... --DannyS712 (talk) 07:35, 1 March 2019 (UTC)
(Not sure what you are trying to say here..) MargaretRDonald (talk) 07:42, 1 March 2019 (UTC) (In any case my comment on lists of accepted species was little more than a throw-away comment.) I just find it frustrating that "Category:Taxa named by Robert Brown" is not remotely within cooee of being so. And if the parameter authority in the species box were to be used it might just come within cooee of being so. MargaretRDonald (talk) 07:42, 1 March 2019 (UTC)
I'm saying that unless the reliable sources are there, we shouldn't be adding the category, especially not with a bot --DannyS712 (talk) 07:45, 1 March 2019 (UTC)
There are many reliable sources, which usually agree, but like all things requiring man-power, they can be out of sync and people in disagreement. I think it would be better if we used wikidata (with whatever its errors) to populate these categories. The result would be better than the entirely misleading stuff we have now where almost none of the taxa named by a person show up because of the failure by humans to populate the categories. MargaretRDonald (talk) 15:36, 15 March 2019 (UTC)
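For the |authority route discussed above, the first step is extracting the naming author (not the bracketed basionym author) from the Speciesbox. A sketch, assuming Python; mapping the resulting author abbreviation to the full name used in "Category:Taxa named by ..." would still need a separate lookup, e.g. via Wikidata:

```python
import re

def naming_author(wikitext):
    """Authority string from |authority= in a {{Speciesbox}}, minus the
    bracketed basionym author: for '(K.Schum.) Barlow' the namer is Barlow."""
    m = re.search(r'\|\s*authority\s*=\s*([^|\n}]+)', wikitext)
    if not m:
        return None
    author = re.sub(r'\([^)]*\)', '', m.group(1))       # drop basionym author
    author = re.sub(r'\[\[|\]\]', '', author).strip()   # unwrap plain wikilinks
    return author or None
```

Pages where no authority can be extracted would be logged rather than tagged, per the concern above about sourcing.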

Deal with links to split article (Batting average)

About 6 months ago Batting average was split into a short parent article about the concept of batting average across sports and 2 child articles Batting average (cricket) and Batting average (baseball) dealing with the specifics of the metric in the individual sports. Articles related to each sport still point to the parent article but should generally point to the sport specific one. After some searches using AWB, I found just over 15k links to Batting average. Using a recursive category search, I found that Category:Cricketers, Category:Seasons in cricket and Category:Years in cricket account for about 3k links and Category:Baseball players, Category:Seasons in baseball, Category:Years in baseball about 12k. There are about 300 remaining links in none of these categories, I am working through those manually with AWB. As an aside, a lot of the baseball players have a link in both an infobox and in article text. I had the cricketer infobox changed already, as that had a hardcoded link to the parent article.

The plan would be to replace

  • [[Batting average]] with [[Batting average (cricket)|]]
  • [[Batting average|foo]] with [[Batting average (cricket)|foo]]

in the first set of categories and

  • [[Batting average]] with [[Batting average (baseball)|]]
  • [[Batting average|foo]] with [[Batting average (baseball)|foo]]

in the second set. A lot of the non-piped links use lower-case, so I don't know if that needs another set of rules. I'm also assuming that the pipe trick works in bot edits; otherwise the replacement text will need to be slightly expanded. I can provide the lists I created of the links to the article, of the categories and then intersections if this helps. Spike 'em (talk) 20:27, 1 April 2019 (UTC)
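For illustration, a minimal sketch of the two replacement rules (the helper name is hypothetical, and it follows GreenC's suggestion below of writing out the full piped link rather than relying on the pipe trick):

```python
import re

def retarget_batting_average(wikitext, sport):
    """Point [[Batting average]] links at the sport-specific article,
    keeping the displayed text; `sport` is "cricket" or "baseball"."""
    target = f"Batting average ({sport})"
    # bare links, upper- or lower-case: [[batting average]] -> piped link
    wikitext = re.sub(r"\[\[([Bb]atting average)\]\]",
                      rf"[[{target}|\1]]", wikitext)
    # already-piped links: [[Batting average|foo]]
    wikitext = re.sub(r"\[\[[Bb]atting average\|([^\]]+)\]\]",
                      rf"[[{target}|\1]]", wikitext)
    return wikitext
```

Capturing the original link text also covers the lower-case links mentioned above without a second set of rules.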

"pipe trick works in bot edits": It does outside of references and other tags. --Izno (talk) 20:42, 1 April 2019 (UTC)
Why not:
  • [[Batting average]] with [[Batting average (cricket)|Batting average]]
It would be standard, and less error prone for other bots/tools. -- GreenC 14:26, 2 April 2019 (UTC)
Sure, no problem with that. As I said, the above relies on the pipe trick and it should be no different for a bot to replace the string with a slightly longer one. Spike 'em (talk) 14:46, 2 April 2019 (UTC)
Another idea is a bot could word check for "cricket" in baseball articles and "baseball" in the cricket articles and log those aside. To help avoid cases where a cricket article might be talking about baseball (rare for sure). -- GreenC 15:03, 2 April 2019 (UTC)
Could do. I did find 8 (all Australian) cricketers who were in both playing categories and did them manually, so there may be more. Spike 'em (talk) 15:08, 2 April 2019 (UTC)
As per comments on the BRFA page, there are a few articles in each category (but fewer than 1%) that have mention of the other sport. Many of these are in hatnotes. Spike 'em (talk) 11:16, 9 April 2019 (UTC)
BRFA filed -- GreenC 00:26, 3 April 2019 (UTC)

Request to add "List of Medal of Honor recipients in non-combat incidents" in 185 articles of recipients that received them.

Request to add "List of Medal of Honor recipients in non-combat incidents" to the articles of 185 recipients that still use the old main article's title. — Preceding unsigned comment added by XXzoonamiXX (talk · contribs) 04:02, 14 April 2019 (UTC)

@XXzoonamiXX: can you explain? Do you just want the redirects to be bypassed? (Replace the old title with the new title in links?) --DannyS712 (talk) 04:12, 14 April 2019 (UTC)
There are 185 persons with the old main article's title that I just recently changed so yes replace the old title of each recipient with the new title I changed in the "See also" sections. XXzoonamiXX (talk) 04:22, 14 April 2019 (UTC)
@XXzoonamiXX: the old title is now a redirect to the new page, so that's not needed. See Wikipedia:Redirect#Do not "fix" links to redirects that are not broken for more. --DannyS712 (talk) 04:30, 14 April 2019 (UTC)
I'm not talking about that, I'm talking about editing and changing the old title into the new one in many recipients' "See also" sections. Otherwise, it'll give people the impression that it's what the old title implied rather than clicking on it for a deeper subject. — Preceding unsigned comment added by XXzoonamiXX (talk · contribs) 04:51, 14 April 2019 (UTC)
In that case, I suggest getting consensus at the article's talk page first. I'd be happy to file a BRFA once there is clear support for the changing of the links. Also, please remember to sign your comments. Thanks, --DannyS712 (talk) 05:09, 14 April 2019 (UTC)

noLDS.orgBOT

The Church of Jesus Christ of Latter-day Saints recently issued an announcement about the correct name of the church[1]. Because of this announcement, the church site has been changed from lds.org to ChurchofJesusChrist.org, and the newsroom from mormonnewsroom.com to newsroom.ChurchofJesusChrist.org. Most wiki pages still have the old site linked. I need a bot to go through and change all the links. The only thing to be changed is the domain; the rest of the URLs are the same.

Thanks, The 2nd Red Guy (talk) 14:50, 23 April 2019 (UTC)
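For illustration, a minimal sketch of the domain swap being requested (the helper name and domain table are assumptions based on the request above; an actual run would go through the URL-change process):

```python
import re

# Assumed domain table from the request; domains are case-insensitive,
# so the casing of the new names is cosmetic.
DOMAIN_MAP = {
    "lds.org": "ChurchofJesusChrist.org",
    "mormonnewsroom.com": "newsroom.ChurchofJesusChrist.org",
}

def migrate_url(url):
    """Swap the old church domains for the new ones, leaving scheme and
    path untouched (per the request, only the domain changes)."""
    for old, new in DOMAIN_MAP.items():
        url = re.sub(rf"(?<=://)(?:www\.)?{re.escape(old)}(?=[/:?#]|$)",
                     new, url)
    return url
```

The lookbehind/lookahead keep the match anchored to a whole domain so unrelated hosts that merely contain the old string are left alone.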

@The 2nd Red Guy: I suggest posting at Wikipedia:Link rot/URL change requests instead --DannyS712 (talk) 16:32, 23 April 2019 (UTC)
Oh, thanks, @DannyS712:. The 2nd Red Guy (talk) 16:36, 23 April 2019 (UTC)

References

  1. ^ Nelson, President Russell M. (7 October 2018). "The Correct Name of the Church - By President Russell M. Nelson". www.churchofjesuschrist.org. Retrieved 23 April 2019.

Make Articles in Compliance with MOS:SURNAME

I've noticed that a lot of articles are not in compliance with MOS:SURNAME, especially in Category:Living people. I've manually changed a few pages, but as a programmer, I think this could be greatly automated. Any repeats of the full name or the first name beyond the title, first sentence, and infobox should be replaced with the last name. I can help out in creating a bot that can accomplish this. InnovativeInventor (talk) 01:21, 21 March 2019 (UTC)

Just bumped into this: Wikipedia_talk:Manual_of_Style/Biography#Second_mention_of_forenames, so there should be detection of other people with the same last name. Additionally, this bot should intend to provide support for humans, not to automate the whole thing (as context is important). InnovativeInventor (talk) 03:57, 21 March 2019 (UTC)

@InnovativeInventor: Is this about the ordering of names in a category page, or about the use of names in the article prose? --Redrose64 🌹 (talk) 17:07, 21 March 2019 (UTC)
@Redrose64: This is about the reuse of names in the article prose and ensuring that the full name is only mentioned once (excluding ambiguous cases where the full name is necessary to clarify the subject of the sentence). InnovativeInventor (talk) 19:40, 21 March 2019 (UTC)
I don't like this, and I'm calling WP:CONTEXTBOT on it. Consider somebody from Iceland, such as Katrín Jakobsdóttir - the top of the article has
Or somebody from a family with several notable members - have a look at Johann Ambrosius Bach (which is quite short) and consider how it would look if we used only surnames: After Bach's death, his two children, Bach and Bach, moved in with his eldest son, Bach. --Redrose64 🌹 (talk) 21:05, 21 March 2019 (UTC)
@Redrose64: The idea is that this will be a human-assisted bot, not a completely automated bot. Just something that can speed up the process. I agree that it depends on the context. But, it would be nice to assist efforts to regularize articles that are not in compliance with MOS:SURNAME. InnovativeInventor (talk) 03:23, 22 March 2019 (UTC)
InnovativeInventor - Considering it will be human assisted, wouldn't it be better to include the functionality inside AWB or create a user script? Kadane (talk) 21:35, 22 March 2019 (UTC)
Kadane I think something that can crawl all of Wikipedia's bio pages would be better. Not sure though. I'm not familiar with the best way to help regularize all the bio pages. InnovativeInventor (talk) 23:46, 22 March 2019 (UTC)

A heads up for AfD closers re: PROD eligibility when approaching NOQUORUM

When an AfD ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as one would treat an expired PROD. One mundane part of this process is specifically checking whether the article is eligible for PROD ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). It would be really nice, when an AfD listing is reaching full term (seven days) with no discussion, if a bot could check the subject's page history and leave a comment on, say, the beginning of the listing's seventh day as to whether the article is eligible for PROD (a simple yes/no). If impossible to check each aspect of PROD eligibility, it would at least be helpful to know whether the article has been proposed for deletion before, rather than having to scour the page history. A bot here could help the closing admin more easily determine whether to relist or soft delete. More discussion here. czar 21:12, 23 March 2019 (UTC)

@Czar: preliminary thoughts:
  • not currently a redirect - detectable via api ([13])
  • Never previously proposed for deletion: search in past edit summaries?
  • Never undeleted - log events for the page ([14])
  • Never subject to a prior deletion discussion: check if the title contains 2nd, 3rd, etc. nomination.
Does that sound about right in terms of automatically verifying prod eligibility? --DannyS712 (talk) 21:37, 23 March 2019 (UTC)
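A sketch of how items 1 and 3 could be derived from the two API responses linked above (the function name is hypothetical, and the network calls themselves are left out; items 2 and 4 need edit-summary and title searches):

```python
# Parameters for the two api.php queries linked above ([13] and [14]);
# `titles`/`letitle` would be set to the article under discussion.
INFO_PARAMS = {"action": "query", "prop": "info",
               "format": "json", "formatversion": 2}
LOG_PARAMS = {"action": "query", "list": "logevents", "letype": "delete",
              "lelimit": "max", "format": "json", "formatversion": 2}

def eligibility_from_api(info_page, log_events):
    """Derive the machine-checkable PROD hints from the responses:
    `info_page` is one entry of query.pages, `log_events` is
    query.logevents. A `restore` action in the deletion log means the
    page was undeleted at some point."""
    return {
        "is_redirect": bool(info_page.get("redirect", False)),
        "ever_undeleted": any(e.get("action") == "restore"
                              for e in log_events),
    }
```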
@DannyS712, I would add to #4: check the talk page for history templates indicating prior deletion listings. E.g., it's possible that the previous AfD was under a different article title altogether. (Since those instances would get complicated, would also be helpful for the AfD comment to note if the article was previously live under another title so the closer can manually investigate.) re: #2, I would consider searching edit summaries for either added or removed PRODs or mentions of deletion (as PRODs not added via script may have bad edit summaries). Otherwise this sounds great to me! czar 21:54, 23 March 2019 (UTC)
@Czar: okay, this seems like something I could do, but it would be a while before a bot was up and running. As far as I can tell, the hardest part will be parsing the AfD itself - how to detect if other users have cast a !vote, rather than just commenting, sorting the AfD, etc. Furthermore, since I'm not very original and implement most of my bot tasks via either AWB (not very usable in this case) or JavaScript, the JavaScript bot tasks are generally just automatically running a user script on multiple pages. So first, I will be able to have a script that alerts the user if an AfD could be subject to PROD, and then post such notices automatically. The first part is just a normal user script, so it (I think) doesn't need a BRFA, and I'll let you know when I have a working alpha and am ready to start integrating the second part. This will be a while though, so if anyone else wants to tackle this bot request I won't be offended :). Thanks, --DannyS712 (talk) 22:07, 23 March 2019 (UTC)
You should look to see how the AFD counter script counts votes. That aside, the first iteration can always just add the information regardless. --Izno (talk) 00:44, 24 March 2019 (UTC)
This seems vaguely related to this discussion on VPT. --Izno (talk) 00:41, 24 March 2019 (UTC)
Yes Izno, you are correct. I will make a note there that a bot request is the manner being pursued. I think your idea of an edit filter might also be useful. That would ensure the presence of a specific string of text in the edit summary which the bot could search for in accordance with #2. I agree that simply adding a message to the effect that the subject being discussed either is or is not eligible for soft deletion without relisting would be good for the initial iteration and suggest that it might be best to maintain that as the functional standard indefinitely. I do want to thank the many editors who have stepped up to assist in this effort. I am proud of my affiliation with such a fine lot. Sincerely.--John Cline (talk) 01:52, 24 March 2019 (UTC)

Indian settlements: updating census data

Most articles on settlements in India (e.g. Bambolim) still use 2001 census data. They need to be updated to use the 2011 census data. SD0001 (talk) 18:10, 29 March 2019 (UTC)

Is 2011 Census data available on WikiData? Template:Austria metadata Wikidata provides an example template and User:GreenC bot/Job 12 was a recent BRFA to add the template to Austria settlement articles: Example. -- GreenC 19:16, 29 March 2019 (UTC)
I don't think they're there on wikidata. This site does provide the data in what could be considered machine-readable format, though. SD0001 (talk) 16:08, 30 March 2019 (UTC)
Another site is https://www.census2011.co.in If these sites were scraped and converted to CSV, the data could be uploaded to Wikidata via Wikipedia:Uploading metadata to Wikidata. This is a big job given the size of India, though, and the next census is in 2021, when it would have to be done over again. The number of potential locations must be immense; I went to http://www.censusindia.gov.in/pca/Searchdata.aspx and entered "Hyderabad" and it brought up a list of villages, one having a population of 40 people, although which village of "Haiderpur" it is, who knows, as there are many listed. -- GreenC 17:28, 30 March 2019 (UTC)
The link I've given above already has the data in Excel format. Ignore the columns part-A ebook and part-B ebook; what we need are the ones under "Town amenities" and "Village amenities". That's two Excel sheets for each of the 35 Indian states and union territories. Some of these files are phenomenally large as you said - Andhra Pradesh contains 27800 villages, for instance. SD0001 (talk) 20:54, 30 March 2019 (UTC)
Ah I see better. Checking Assam "Town Amenities" spreadsheet, for "Goalpara" (line #17), it has a population of 11,617 but our Goalpara says 48,911. If we assume this is for the Goalpara district it is 1,008,959, but in the spreadsheet it only adds up to about 20,000 (line #15-#18). Since most people there speak Goalpariya it seems unlikely there was a sudden population loss due to emigration. Are the spreadsheet numbers in some other counting system, or decimal offset? -- GreenC 22:34, 30 March 2019 (UTC)
GreenC, 11617 is the number of households. Population is 53430, which is reasonable. To get total population of Goalpara district, you need to add up populations in line #15-#25 plus line #2161-#2989 in 'Village amenities' sheet, which roughly gives a figure close to 1,008,959. SD0001 (talk) 23:22, 30 March 2019 (UTC)
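To illustrate the collation step (the column names here are hypothetical placeholders, not the real census headers): the key is summing the population column, not households, over a district's town and village rows:

```python
# Hypothetical rows standing in for the "Town amenities" and "Village
# amenities" sheets once exported; the real files use other headers.
rows = [
    {"district": "Goalpara", "unit": "Goalpara (town)",
     "households": 11617, "population": 53430},
    {"district": "Goalpara", "unit": "some village",
     "households": 40, "population": 180},
]

def district_population(rows, district):
    """Sum the population column (not households -- the source of the
    11,617 vs 53,430 confusion above) across a district's rows."""
    return sum(r["population"] for r in rows if r["district"] == district)
```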
Ah thanks again, SD0001! A program to extract and collate the data looks like the next step. I can't do it immediately as I am backlogged with programming projects. Extracting the data and uploading to Wikipedia per Wikipedia:Uploading metadata to Wikidata would be more than half the battle. Also ping User:Underlying lk who made the Wikidata instructions. -- GreenC 00:19, 31 March 2019 (UTC)
It seems like we have 2011 population figures for over 70,000 Wikidata entities, though once we only consider entities with an en.wiki article, it drops to less than 4,000.--eh bien mon prince (talk) 05:15, 31 March 2019 (UTC)
Interesting queries, thanks. Notice some Wikidata entries are referenced some not. Probably the data was loaded by different processes with variable levels of reliability and completeness. I would not be comfortable loading into encyclopedia until it has been checked against a known source and the source field updated. Found Administrative divisions of India helpful to understand the census divisions though the more I look the bigger and more complex it becomes. -- GreenC 14:14, 31 March 2019 (UTC)
@Magnus Manske: This might be Gameable. --Izno (talk) 15:57, 31 March 2019 (UTC)

Credits adapted from

Thousands of articles about music artists, albums and songs reference the source in the body text (example: OnePointFive). Such references belong in a <ref> block at the end of the page and not in the body text. Most of these references follow a common pattern, so I hope this kind of edit can be made by a bot.

I suggest making a bulk replacement from

= =Track listing= = Credits adapted from [[Tidal (service)|Tidal]].<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>

to

= =Track listing<ref name="Tidal">{{cite web|url=https://listen.tidal.com/album/93301143|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>= =

Different sources: Tidal (service), “the album notes”, “the album sleeve”, “the liner notes of XXX”. Different heading names, including “Track listing”, “Personnel”, “Credits and personnel”. Variants: “Credits adapted from XXX”, “All credits adapted from XXX”, “All personnel credits adapted from XXX”.

Does this sound feasible/sensible? --C960657 (talk) 17:14, 28 February 2019 (UTC)

References should not be located in section titles. Pretty sure there is a guideline about it, and not good for a couple reasons. The correct way is current, create a line that says "Source: [1]" or something. -- GreenC 17:43, 28 February 2019 (UTC)
Citations should not be placed within, or on the same line as, section headings. (WP:CITEFOOT) — JJMC89(T·C) 03:38, 1 March 2019 (UTC)
Also (from MOS:HEADINGS): Section headings should: ... Not contain links, especially where only part of a heading is linked. Unless you use pure plain-text parenthetical referencing, refs always generate a link. --Redrose64 🌹 (talk) 12:41, 1 March 2019 (UTC)
You are right. I could not find a guideline on how to place the reference if it is the source of an entire section/table/list. "Source: [1]" is a good suggestion, perhaps even moved to the last line of the section.--C960657 (talk) 17:25, 1 March 2019 (UTC)
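For reference, a sketch of a regex that detects the "Credits adapted from …" variants listed above together with their trailing <ref> (pattern and helper name are assumptions; rewriting to a "Source:" line would build on the two captures):

```python
import re

CREDITS_RE = re.compile(
    r"(?:All\s+(?:personnel\s+)?)?[Cc]redits\s+adapted\s+from\s+"
    r"(?P<source>.+?)\.?\s*(?P<ref><ref[^>]*>.*?</ref>)",
    re.DOTALL,
)

def find_credit_line(wikitext):
    """Return (source, ref) if a 'Credits adapted from ...' sentence
    with a trailing <ref> is present, else None."""
    m = CREDITS_RE.search(wikitext)
    return (m.group("source"), m.group("ref")) if m else None
```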
Note, I replaced == with = = above so the bots that update the TOC of this page can function as normal. Headbomb {t · c · p · b} 22:12, 1 April 2019 (UTC)

Population for Spanish municipalities

Adequately sourced population figures for all Spanish municipalities can be deployed by using {{Spain metadata Wikidata}}, as was recently done for Austria. See this diff for an example of the change.--eh bien mon prince (talk) 11:35, 11 April 2019 (UTC)

BRFA filed Well, since my bot for Austria is already written and completed, I might as well do this too. -- GreenC 13:47, 11 April 2019 (UTC)

Category:Pages using deprecated image syntax

Category:Pages using deprecated image syntax has over 89k pages listed, making it infeasible to fix these manually. Could a bot be created to handle this? --Gonnym (talk) 06:18, 12 April 2019 (UTC)

@Gonnym: I might be able to help, but can you give some examples of the specific edits that would need to be made (ideally with diffs) and how to screen for those? Thanks, --DannyS712 (talk) 06:26, 12 April 2019 (UTC)
Pages in this category use a template that uses Module:InfoboxImage in a {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{alt|}}}}} style that passes to the |image= field an image syntax in the format |image=File:Example.jpg. However, as usual when dealing with templates, the exact parameters used and their names will differ between the templates. So for example:
  • {{Infobox television}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1.13}}}<!-- 1.13 is the most common size used in TV articles. -->|alt={{{image_alt|{{{alt|}}}}}}}}
  • {{Infobox television season}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|{{{imagesize|}}}}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{image_alt|{{{alt|}}}}}}}}
  • {{Infobox television episode}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|alt={{{alt|}}}}}

Also, a bare image isn't the only value that gets passed to |image=; it is sometimes combined with an image size and a caption, which will need to be extracted and passed through the correct parameters. --Gonnym (talk) 06:37, 12 April 2019 (UTC)

@Gonnym: okay, now it looks way more complicated. Maybe 1 infobox at a time. Can you provide some diffs for a few different types of cases with an infobox of your choice? Thanks, --DannyS712 (talk) 06:41, 12 April 2019 (UTC)
  • The West Wing (season 3) ({{Infobox television season}}) has image=[[File:West Wing S3 DVD.jpg|250px]]. Instead it should be, |image=West Wing S3 DVD.jpg and |image_size=250px (it can also be without "px" as the module does that automatically).
  • Red Dwarf X has image=[[File:Red Dwarf X logo.jpg|alt=Logo for the tenth series of ''Red Dwarf''|250px]]. Instead it should be, |image=Red Dwarf X logo.jpg, |image_size=250px and |image_alt=Logo for the tenth series of Red Dwarf.
For a better systematic approach though, maybe it would be better finding out what the top faulty templates are, and create a mapping of what parameters the templates use and their names. If the bot can check the template name and know what parameters to use, this should speed things up.--Gonnym (talk) 07:00, 12 April 2019 (UTC)
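A sketch of the extraction step described above, splitting a |image=[[File:…]] value into the bare filename, size, and alt (the per-template parameter mapping is deliberately left out, since the names differ between infoboxes):

```python
import re

def split_image_value(value):
    """Split |image=[[File:Foo.jpg|alt=...|250px|caption]] into the
    bare filename, size, alt and caption that Module:InfoboxImage
    expects as separate parameters. Mapping these back onto each
    infobox's own parameter names (image_size vs imagesize, image_alt
    vs alt) would need the per-template table suggested above."""
    m = re.match(r"\[\[(?:File|Image):([^|\]]+)(?:\|([^\]]*))?\]\]$",
                 value.strip())
    if not m:
        return {"image": value.strip()}  # already a bare filename
    parts = {"image": m.group(1).strip()}
    for piece in (m.group(2) or "").split("|"):
        piece = piece.strip()
        if not piece:
            continue
        if piece.startswith("alt="):
            parts["alt"] = piece[4:]
        elif re.fullmatch(r"x?\d+(?:x\d+)?px", piece):
            parts["size"] = piece  # "250px", "x150px", "150x150px"
        else:
            parts["caption"] = piece
    return parts
```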
@Gonnym: And now I'm completely lost. I don't think I'm the right bot op to help with this, sorry. --DannyS712 (talk) 07:02, 12 April 2019 (UTC)
I think someone could start with {{Infobox election}}, which appears to have roughly 11,000 articles in the error category. Here's a sample edit. Basically, for this template, you need to remove the initial brackets and the "File:" part of the image parameter value, then move the pixel specification (which may come in a variety of forms, like "x150px" or "150x150px") to the next line to a new |image#_size= parameter. The number "#" needs to match the image# parameter, e.g. |image2= gets |image2_size=. Drop me a line if this is confusing; I feel like it's a lot to explain in a short paragraph.
This may be a good mini-project to discuss at length at Category talk:Pages using deprecated image syntax. – Jonesey95 (talk) 07:59, 12 April 2019 (UTC)
In many cases, the |image_size=250px (or equivalent) may simply be omitted, because most infoboxes are set up to use a default size where none has been set (example). In my opinion, falling back to the default is preferable since it gives a consistent look between articles. --Redrose64 🌹 (talk) 12:46, 12 April 2019 (UTC)
Mostly true, but unfortunately, that is not the case at {{Infobox election}}, as you can see in this before-and-after comparison. – Jonesey95 (talk) 13:15, 12 April 2019 (UTC)
It appears that Number 57 (talk · contribs) is against the proposal. --Redrose64 🌹 (talk) 13:44, 12 April 2019 (UTC)
I guess I was pinged because of this edit? I don't really understand what is being discussed here, but removing the image size parameters like this edit means that the images in the infobox are different sizes – is this because there is no default size for this infobox, or the default size is for a single dimension (and not all photos have the same aspect ratio)? Can the default size be set to 150x150 (which is the most commonly used size)? Cheers, Number 57 13:52, 12 April 2019 (UTC)
{{Infobox election}} has a default size of 50px for |flag_image=, 300px for |map_image#= and no default for |image#=, which then falls back to frameless (which I'm not sure what it is). If there is a correct size that the template should use, then the template should probably be edited to handle it. --Gonnym (talk) 14:02, 12 April 2019 (UTC)
(edit conflict) @Number 57: If you use the |image1=[[File:Soleiman Eskandari.jpg|150x150px]] format it puts the page into Category:Pages using deprecated image syntax, because the parameter is intended for a bare filename and nothing else, as in |image1=Soleiman Eskandari.jpg. --Redrose64 🌹 (talk) 14:05, 12 April 2019 (UTC)
OK. I have no problem with using some other way to get matching image sizes, but if it is added as a default, it needs to be two-dimensional, otherwise it ends up in a bit of a mess where images have different aspect ratios. Number 57 14:07, 12 April 2019 (UTC)
Redrose64: your edit, like my edit that I linked above (and self-reverted) resulted in image sizes that look bad. Either the template needs to be modified, or the image sizes need to be preserved in template parameter values within the article, but removing them changes the image rendering in a negative way in that article (and presumably others). – Jonesey95 (talk) 17:00, 12 April 2019 (UTC)

"Accidents and incidents"

All of our articles and categories on transport "accidents and incidents" use that phrasing, as opposed to "incidents and accidents" (which is a line from "You Can Call Me Al"). However, there are a lot of section heads that are "== Incidents and accidents ==". I would like a bot to search articles for the phrasing "== Incidents and accidents ==" and replace it with "== Accidents and incidents ==". Can that be done?--Mike Selinker (talk) 19:13, 20 April 2019 (UTC)

Why? What difference does it make? I fail to see how a Paul Simon song should influence our choice of phrasing. --Redrose64 🌹 (talk) 22:27, 20 April 2019 (UTC)
That's just a random thing that might be causing some users to think that's the right format. The reason is that every article title and every category title uses the phrasing "Accidents and incidents" (there are hundreds of these). Only some articles' section heads use a different format. It's just about being consistent, which you can either value or not at your discretion.--Mike Selinker (talk) 22:59, 20 April 2019 (UTC)
There are wikilinks like [[Aigle Azur#Incidents and accidents|Aigle Azur Flight 258]] (found in Flight 258) that would break and need fixing. I see somewhere less than 2000 cases overall in section headers (though that search might be improved, a regex version is timing out). Is this something you can do with AWB? It would be better for the person running AWB to be the one with an interest in the change, otherwise the bot operator has to get community consensus etc.. which is involved and time consuming and no guarantee there would be consensus. Could also try Wikipedia:AutoWikiBrowser/Tasks. -- GreenC 14:52, 30 April 2019 (UTC)
@Mike Selinker: since the thread is old. -- GreenC 14:54, 30 April 2019 (UTC)
I can certainly try. Thanks!--Mike Selinker (talk) 14:57, 30 April 2019 (UTC)
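For what it's worth, the heading replacement itself reduces to a one-line regex; a hedged sketch (the helper name is mine, and per GreenC's note the incoming [[Page#Incidents and accidents|…]] links still need a separate pass or an {{anchor}}):

```python
import re

def fix_heading(wikitext):
    """Swap the heading wording at any heading level. Incoming section
    links to the old anchor still need a separate fix, as noted above."""
    return re.sub(r"^(=+)\s*Incidents and accidents\s*\1$",
                  r"\1 Accidents and incidents \1",
                  wikitext, flags=re.MULTILINE)
```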

Automating new redirect patrolling for uncontroversial redirects

I recently started patrolling newly created redirects and have realized that certain common types of redirects could be approved through an automated process where a bot would just have to parse the target article and carry out some trivial string manipulation to determine if the redirect is appropriate. A working list of such uncontroversial redirects:

  1. Redirects of the form Foo (disambiguation) --> Foo, where Foo is a dab page
  2. Redirects where the only difference is the placement of a year (e.g. Norwich County elections, 1996 --> 1996 Norwich County elections)
  3. Redirects from alternative capitalizations
  4. Redirects between different English spelling standards (e.g. Capitalisation --> Capitalization)
  5. Redirects where the only difference is the use of diacritics
  6. Redirects where the redirect title is included in bold text in the lead of the target, or in a section heading
  7. Redirects from titles with disambiguators to targets of the same name without a disambiguator where the disambiguator content is present in the lead of the target
  8. Redirects from alternative phrasings of names for biographies (e.g. Carol Winifred Giles ––> Carol W. Giles). This would also require the bot to search for possible clashes with other similarly named individuals

Potentially more controversial tasks could include automated RfD nomination for clearly unnecessary redirects, such as redirects with specific patterns of incorrect spacing. I also think it would be a good idea to include an attack filter, so that if a redirect contains profanity or other potentially attackish content the bot will not automatically patrol them even if it appears to meet the above criteria. I anticipate that if this bot were to be implemented, it would cut necessary human work for the redirect backlog by more than half. I've never written a Wikipedia bot before, but I am a software engineer so I anticipate that if people think that this is a good idea I could do a lot of the coding myself, but obviously the idea needs to be workshopped first. There's also potential extensions that could be written, such as detecting common abbreviations or alternate titles (e.g. USSR space program --> Soviet space program, OTAN --> NATO) signed, Rosguill talk 22:17, 28 April 2019 (UTC)
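A few of the checks above (items 1, 3 and 5) reduce to plain string manipulation; a hedged sketch (function names are hypothetical, and the dab-status, lead-text and name-clash checks would still need the target page's content):

```python
import unicodedata

def strip_diacritics(s):
    """Remove combining marks via NFD normalization."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if not unicodedata.combining(c))

def is_uncontroversial(redirect, target):
    """Checks 1, 3 and 5 from the list above; the other items need the
    target's wikitext, so they are not sketched here."""
    if redirect == f"{target} (disambiguation)":
        return True  # item 1 -- target must separately be verified as a dab
    if redirect.lower() == target.lower():
        return True  # item 3: capitalization-only difference
    if strip_diacritics(redirect).lower() == strip_diacritics(target).lower():
        return True  # item 5: diacritics-only difference
    return False
```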

@Rosguill: automated RfD would be a dangerous idea. Also, if they have incorrect spacing R3 may apply. --DannyS712 (talk) 22:18, 28 April 2019 (UTC)
DannyS712, in particular I was thinking of examples where the incorrect spacing is specifically in relation to a disambiguator, which seem to be routinely deleted at RfD. signed, Rosguill talk 22:20, 28 April 2019 (UTC)
@Rosguill: in that case, G6 may apply. Just wanted to point that out, I'm not very active at RfD so I'll defer to you --DannyS712 (talk) 22:21, 28 April 2019 (UTC)
DannyS712 at any rate I'm much more interested in implementing the uncontroversial approvals, which are a much larger portion of created redirects. signed, Rosguill talk 22:25, 28 April 2019 (UTC)

Circling back to this

If there's not going to be any further discussion here, is there anywhere else I should post or things I should do before implementing this bot? The Help:Creating a bot has a flowchart including the steps for writing a specification and making a proposal for the bot, but it's not clear to me which forums I should be using for that (or if the above discussion was sufficient). An additional concern is that while I believe that from a technical perspective this shouldn't be a terribly difficult bot to implement, I would need an admin to give the bot NPP permissions in order to run the bot. signed, Rosguill talk 23:04, 5 May 2019 (UTC)

@Rosguill: if any admin is willing to give DannyS712 test NPP rights, I'll try to work on a user script to easily patrol such pages. Any patrolling I do before any bot approval will be triggered manually, and I agree to be accountable for it in the same manner as when I myself patrol new pages. --DannyS712 (talk) 23:27, 5 May 2019 (UTC)
NPP is thisaway. Primefac (talk) 01:43, 6 May 2019 (UTC)
I'd prefer that you conduct testing and development on testwiki. — JJMC89(T·C) 02:19, 6 May 2019 (UTC)
@JJMC89, Primefac, and Rosguill: I have a working demo set up - see User:DannyS712 test/redirects.json for a list of pages that the bot would automatically patrol. Out of the most recent 3000 unreviewed redirects, it would patrol 96 - those that end in (disambiguation) and point to the same page without (disambiguation), and those that point to other capitalizations/acronyms. I'll note that NO automatic patrols have been performed, just a list has been made. Thoughts? --DannyS712 (talk) 23:37, 12 May 2019 (UTC)
DannyS712, I'm a bit surprised that so few are caught by it, but I guess it's still more useful than nothing and a good starting point in case we want to try implementing some of the other suggested cases. It seems like it's also catching redirects that differ only by the inclusion of diacritics, which you didn't mention in your comment. signed, Rosguill talk 00:21, 13 May 2019 (UTC)
@Rosguill: yes, sorry - I meant accents (or diacritics), not acronyms. --DannyS712 (talk) 00:23, 13 May 2019 (UTC)
BRFA filed --DannyS712 (talk) 21:29, 14 May 2019 (UTC)

Astana ---> Nur-Sultan

Please change all occurrences of "Astana" in all articles to new name "Nur-Sultan". Also please move all articles with "Astana" to "Nur-Sultan". Thanks! --Patriccck (talk) 18:16, 7 May 2019 (UTC)

@Patriccck: The first one would probably fail WP:CONTEXTBOT. Regarding the second one, bots don't normally move pages. --Redrose64 🌹 (talk) 19:33, 7 May 2019 (UTC)

Would someone be able to create a bot that could search all the articles listed under Category:WikiProject Mountains articles and its child categories for those also listed in Category:Articles with dead external links? Or maybe there's an existing tool that can do this? RedWolf (talk) 21:19, 16 May 2019 (UTC)

@RedWolf:  Doing... with petscan --DannyS712 (talk) 21:24, 16 May 2019 (UTC)
@RedWolf: here is a list of all of the articles listed in the categories. But, petscan isn't loading the intersection with the dead externals, so I don't have the actual result you wanted yet --DannyS712 (talk) 21:38, 16 May 2019 (UTC)
@RedWolf: There are 317 pages: [15] or [16] both generate the list you want, but they take a few minutes to run --DannyS712 (talk) 21:43, 16 May 2019 (UTC)
There we go, petscan; I had run it a couple of times in the distant past, just couldn't remember the name of it. I'm going to add a sub-page to the WP to hold the results and write up a description of how to re-generate it. Thanks for your help. RedWolf (talk) 22:03, 16 May 2019 (UTC)
I've copied the query results onto Wikipedia:WikiProject Mountains/Articles with dead external links with the URL to re-generate it. RedWolf (talk) 03:09, 17 May 2019 (UTC)
WP:Petscan used to be called WP:Catscan (short for "category scanner"); it was renamed as a play on the word "cat" to "pet" when it started doing more than just basic category scanning and intersections. --Izno (talk) 02:35, 17 May 2019 (UTC)
Ah ok, catscan I remember. :) I was wondering about the "pet" prefix. Thanks for the info. RedWolf (talk) 03:06, 17 May 2019 (UTC)

newenglandwild.org -> nativeplanttrust.org

The New England Wild Flower Society [17] changed its name and web presence to the Native Plant Trust [18]. And in the process broke most of its old URLs. Only insecure http requests to the old web site get an HTTP 301 redirect. https links time out. I suspect a firewall misconfiguration on their end, but I emailed about the problem and it hasn't been fixed.

I am requesting a bot find all the instances of DOMAIN.newenglandwild.org/PATH (http or https) and rewrite to DOMAIN.nativeplanttrust.org/PATH (https only, optionally only if that new URL returns a 2xx or 3xx status code).
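As an illustration only (not an actual bot, and skipping the optional status-code check on the new URL), the rewrite itself is a straightforward regex substitution:

```python
import re

# Matches http or https URLs on newenglandwild.org, with any subdomain.
OLD_URL = re.compile(
    r'https?://([a-z0-9.-]+\.)?newenglandwild\.org(/[^\s<>"\]]*)?',
    re.IGNORECASE,
)

def rewrite(text: str) -> str:
    # Force https and swap the domain, keeping subdomain and path intact.
    return OLD_URL.sub(
        lambda m: "https://%snativeplanttrust.org%s" % (m.group(1) or "", m.group(2) or ""),
        text,
    )

print(rewrite("http://gobotany.newenglandwild.org/species/vaccinium/caesariense/"))
# https://gobotany.nativeplanttrust.org/species/vaccinium/caesariense/
```

A real bot run would also need to handle archive URLs and dead-link templates, as discussed below in this thread.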

I don't have a count of edits to make. Here is a sample page: Vaccinium caesariense. As I write this, reference 2 links to https://gobotany.newenglandwild.org/species/vaccinium/caesariense/ (a timeout error). It should link to https://gobotany.nativeplanttrust.org/species/vaccinium/caesariense/.

Vox Sciurorum (talk) 17:51, 17 May 2019 (UTC)

It looks like this URL pattern appears in only 114 pages in all namespaces, so someone with AWB should be able to make quick work of it. – Jonesey95 (talk) 19:14, 17 May 2019 (UTC)
URLs are difficult. There are archive URLs where the old URL is part of the path and changing the path breaks the archive URL; or where the new link doesn't work and the old link is converted to an archive URL. Or where converting to a new link can replace an existing archive URL. Then making sure {{dead link}} exists if needed. It is quite complex. Everything should be checked and tested. There are bots designed for making URL changes see WP:URLREQ. -- GreenC 21:51, 17 May 2019 (UTC)
I didn't know about that page. I'll repost the request there. This request can be marked closed. Vox Sciurorum (talk) 22:40, 17 May 2019 (UTC)
There is also a template Template:Go Botany (edit | talk | history | links | watch | logs) that could be used specifically for links to gobotany.nativeplanttrust.org. For example, it renders https://gobotany.nativeplanttrust.org/species/vaccinium/caesariense/ as: "Vaccinium caesariense". Go Botany. New England Wildflower Society. Probably not worth the effort to make the bot rewrite the links, though. Vox Sciurorum (talk) 19:55, 17 May 2019 (UTC)
All of the links in main space have been updated, but they are inconsistently formatted and many of them still say "New England Wild Flower Society". I recommend doing a search for that string, or for nativeplanttrust.org and replacing the various citation formats with the {{Go Botany}} template, where appropriate. I didn't do anything to the 24 pages outside of article space, since most of them are sandboxes and maintenance pages. – Jonesey95 (talk) 07:45, 18 May 2019 (UTC)

Unsigned comments

I find myself regularly using the excellent User:Anomie/unsignedhelper.js to document unsigned comments in talk page discussions. This looks like a perfect task for a bot, and I wonder whether there are any reasons it has not been done earlier. Could a kind contributor take up this uncontroversial and useful task? The process should work similarly to rescuing orphaned references in article space, as performed by User:AnomieBOT. — JFG talk 15:55, 25 May 2019 (UTC)

@JFG: This is one of the normal tasks of SineBot (talk · contribs). --Redrose64 🌹 (talk) 16:17, 25 May 2019 (UTC)
Then why do I find so many unsigned comments to fix, including in older discussions? Am I quicker than the bot, or are there some cases that are not detected properly? — JFG talk 16:26, 25 May 2019 (UTC)
If you look at the bot's contribs (link provided) you'll see that it has been down since 30 April. --Redrose64 🌹 (talk) 16:30, 25 May 2019 (UTC)
JFG, the answer to both of those options is "yes" - the bot doesn't hit all unsigned posts (not only because of defined limits on its editing but also something to do with lag time, server loads, etc, and I honestly don't remember the details), and sometimes it's a bit slow. It also, as mentioned, has been down for a month. Primefac (talk) 19:17, 25 May 2019 (UTC)
That explains it. So let's ask bot maintainer Slakr how s/he can get it back to work. — JFG talk 08:00, 26 May 2019 (UTC)
SineBot is up again. --Redrose64 🌹 (talk) 11:27, 29 May 2019 (UTC)
Yay! — JFG talk 14:08, 3 June 2019 (UTC)

One-off task: move expat userboxes

Hi. Could somebody move all userboxes with the word "expat" in the name from Category:Residence user templates to its subcategory Category:Expat user templates? —⁠andrybak (talk) 08:23, 17 June 2019 (UTC)

Do you know how many there are? From a first look it would be easier to do by hand or with AWB than to use a bot. I did try in AWB, but couldn't figure out how to find user pages / templates in a category rather than articles. Spike 'em (talk) 15:33, 17 June 2019 (UTC)
Y Done I figured out how to de-filter template space in AWB and have done this. Spike 'em (talk) 16:03, 17 June 2019 (UTC)
Great! Thank you, Spike 'em. It seems that both templates and lists in Wikipedia namespace can be moved that way. Will keep that in mind with regards to possible future bot-requests. —⁠andrybak (talk) 17:11, 17 June 2019 (UTC)

European Challenge bot

Hi, I would like to request a bot to add Template:WPEUR10k to all articles that appear in the lists of created articles at Wikipedia:The 2500 Challenge (Nordic) and Wikipedia:The 10,000 Challenge. I think it would be very helpful so all the articles receive the template tag. I suggest this as there are literally thousands of articles in need of the tag.--BabbaQ (talk) 13:37, 26 April 2019 (UTC)

If there aren't any objections I will do this task, however there might already be a bot operator that can run this task without requesting approval. Kadane (talk) 22:32, 26 April 2019 (UTC)
Thanks. It would be really appreciated.BabbaQ (talk) 23:04, 26 April 2019 (UTC)

Compilation of all handle URLs [Dump/quarry required probably]

If someone could find all URLs (found across any namespace) that match this pattern, that would be great:

  • https?:\/\/(.+)\/handle\/.+

Sorting the results by domain ($1) would be even better. Headbomb {t · c · p · b} 23:22, 24 June 2019 (UTC)

The result can just be a text file or whatever. Headbomb {t · c · p · b} 23:23, 24 June 2019 (UTC)
@Headbomb:  running... database query --DannyS712 (talk) 23:41, 24 June 2019 (UTC)
The query timed out and died, sorry --DannyS712 (talk) 04:14, 25 June 2019 (UTC)
@DannyS712: any way around this? A datadump scan? A quary thing? Headbomb {t · c · p · b} 04:36, 25 June 2019 (UTC)
With a regex search, [19], one can find a list of pages with that pattern. You can combine with a database query or something if you want to find the actual URLs. Or someone with a database dump downloaded can probably do it relatively easily. Galobtter (pingó mió) 04:52, 25 June 2019 (UTC)
@Galobtter: yeah, but I don't want the list of pages with such URLs, I want the list of URLs themselves. Headbomb {t · c · p · b} 04:56, 25 June 2019 (UTC)
Using wikiget search for article names containing the URLs ("wikiget -a insource:/regx/"). Then for each article download the wikisource ("wikiget -w 'article name'") and grep out the URLs (grep -Eo "<regex>"). -- GreenC 05:15, 25 June 2019 (UTC)
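Very roughly, and assuming wikiget is installed and on PATH, the pipeline above would look like this (the network steps are shown as comments; only the extraction step is runnable as-is, on sample text of my own):

```shell
# 1. Get the article names (network step, commented out here):
#      wikiget -a 'insource:/\/handle\//' > titles.txt
# 2. Fetch the wikitext of each article (also a network step):
#      while IFS= read -r t; do wikiget -w "$t"; done < titles.txt > wikitext.txt
# 3. Grep out the handle URLs, then sort by domain (field 3 when split on "/"):
grep -Eo 'https?://[^ ]+/handle/[^] |<>"]+' <<'EOF' | sort -t/ -k3,3
Some text https://dspace.mit.edu/handle/1721.1/12345 in a citation,
plus [https://dspace.library.uu.nl/handle/1874/356 a bracketed link].
EOF
```

The `sort -t/ -k3,3` groups the output by domain, as requested above.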
That brings back PTSD-like memories of StackExchange. Not saying I can't do that, but it could take me anywhere from 10 minutes to 4 years to wrap my head around that, once I even figure out how to install that thing, let alone run it well enough to Hello World something out of it. Headbomb {t · c · p · b} 05:29, 25 June 2019 (UTC)
Done, using a dump. See User talk:Headbomb#External links 'handle'. Johnuniq (talk) 23:42, 25 June 2019 (UTC)

Request for one-time run to tag pages with bare references with Template:Cleanup bare URLs

This is what I see as a rather uncontroversial request, which I have been doing manually for about a month or so now. In order to better identify pages that use bare URL(s) in reference(s) in an effort to get the URLs fixed, I am requesting that the {{Cleanup bare URLs}} tag be added by a bot to all pages which meet the following conditions:

  1. Has at least one instance of a <ref> tag immediately followed by http: or https:, then any run of characters containing no spaces (underscores are okay), then a closing </ref> tag.
    (In such aforementioned instances, the reference tags should not be enclosed inside a citation template.)
  2. There is currently not a transclusion of {{Cleanup bare URLs}} or any of its redirects on that page
  3. The page is in the "(article)" namespace

...From my recent experience tagging these pages, the parameters above will avoid most, if not all, false positives.

I am requesting this run only once so that it doesn't need constant checks, and this should adequately provide an assessment on how many pages need reference url correction. Steel1943 (talk) 17:54, 22 April 2019 (UTC)

@Derek R Bullamore, MarnetteD, and Meatsgains: Pinging editors who I know either do work on or have worked on correcting pages tagged with {{Cleanup bare URLs}} in the past to make them aware of this discussion, and to see if there are any concerns or issues I'm not seeing at the moment. Steel1943 (talk) 17:58, 22 April 2019 (UTC)
  • Is there a group or individual who cleans up after these tags? Wouldn’t a report suffice? –xenotalk 18:30, 22 April 2019 (UTC)
    • @Xeno: The individuals that I'm aware of are pinged in the aforementioned comment. And unfortunately, a report would not suffice since a report does not place the respective pages in appropriate cleanup categories that the aforementioned editors monitor. In addition, the report may become outdated, whereas in theory, the tags on these pages should not since they tend to get removed once the bare reference urls on the pages are resolved; once the tag gets removed, then, of course, the page gets removed from the appropriate cleanup category. Steel1943 (talk) 18:52, 22 April 2019 (UTC)
  • I don't think this is a good idea. Bare URLs are so common and constantly being added that it would tag a significant percentage of the entire project. There is also context: for an article with 400 refs where someone adds a single bare URL, a banner would be overkill. If a bot were to do this it should probably search out the egregious cases, like an article with > 50% bare citations. Reports can work if you do it right, see this report I recently created. It has documentation, is regenerated automatically every X days, linked from other pages, etc.. -- GreenC 19:05, 22 April 2019 (UTC)
Further thought, a report could categorize pages by percentage of bare links so you can better allocate your time on which pages to fix and how. -- GreenC 19:07, 22 April 2019 (UTC)
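A rough sketch of how such a percentage could be computed per page (the regexes are simplified guesses for illustration; self-closing and list-defined refs are ignored):

```python
import re

# Any <ref>...</ref> body (self-closing <ref ... /> tags are skipped).
REF = re.compile(r'<ref[^>/]*>.*?</ref>', re.DOTALL | re.IGNORECASE)
# A ref whose entire body is just a URL, optionally in square brackets.
BARE = re.compile(r'<ref>\s*\[?\s*https?://[^][<>\s"]+\s*\]?\s*</ref>')

def bare_ref_percentage(wikitext: str) -> float:
    # Share of <ref>...</ref> bodies that are just a bare URL.
    refs = REF.findall(wikitext)
    if not refs:
        return 0.0
    bare = sum(1 for r in refs if BARE.fullmatch(r))
    return 100.0 * bare / len(refs)

print(bare_ref_percentage(
    '<ref>https://example.org/x</ref>'
    '<ref>{{cite web |url=https://example.org/y}}</ref>'
))  # 50.0
```

A report generator could then bucket pages by this figure (e.g. > 50%, 25–50%, < 25%) so editors can triage the worst cases first.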
The original proposal is likely to unearth tens of thousands of articles so affected - given the very small number of editors who work on the {bare URLs} cases, this might generate more problems than that small gang could possibly manage. The latter amendment(s) seem more feasible, but nevertheless we could still "dig up more snakes than we can kill", to borrow an old Texas expression. (This despite the fact that I am from the North of England !) I think a "dummy run" may be better, to get a true perspective of numbers. - Derek R Bullamore (talk) 19:26, 22 April 2019 (UTC)
  • I think that what Derek R Bullamore states may be a good starting point: Before (or in lieu of) a bot performs this task, is it possible for a bot or script to get a count of how many pages fall under the parameters I stated at the beginning of this thread? (I guess this goes somewhat in line with the "report" inquiry Xeno stated above.) Steel1943 (talk) 19:48, 22 April 2019 (UTC)
@MZMcBride: do you still make these kind of reports? –xenotalk 21:52, 22 April 2019 (UTC)
Hi xeno. In my experience, it takes at least a few hours of work to write this type of report and many more hours waiting around for files to download or be scanned. That's a pretty steep investment, so unless it's a report that's particularly interesting or damning or whatever, I'm personally unlikely to want to donate my time. It's possible others have created faster and easier dump scanning tools that could be used here. --MZMcBride (talk) 07:01, 4 June 2019 (UTC)
In a previous effort, at least one bot ran which expanded the linked URL to at least include a title. I would guess a BRFA for that would succeed. I would see that as greatly preferable to any tagging. --Izno (talk) 00:23, 23 April 2019 (UTC)
Ideally this would be done manually or semi-manually (with assist of tools) as expanding citations is basically impossible to do well fully automated. CitationBot is a start as is refTool (hope those names are right). Those tools took years and they are still not reliable enough to be full auto. We could add a title and call it a day but not ideal. -- GreenC 00:50, 23 April 2019 (UTC)
I echo DRb's post though I would up the guesstimate to more than a million articles that would need work. Years ago I tried to put a dent into the "External links modified" (example Talk:War and Peace (film series)#External links modified) task and wound up being overwhelmed by the fact that more articles were being added than those that I had checked each time the bot did a new run. Now there was a time when edit-a-thons were arranged around tasks like this but I haven't seen one of those in years. GreenC's idea of limiting it to > 50% bare citations might be a workable solution. MarnetteD|Talk 01:18, 23 April 2019 (UTC)
Agreed - I think we begin with > 50% bare URLs to start and if we can manage to stay on top of those pages, then we can incrementally decrease the percentage. Meatsgains(talk) 02:48, 23 April 2019 (UTC)
  • As I was staring at this discussion thinking of a way to simplify this task, I came up with an idea for a way to update this proposal. How about something along these lines: Rather than being a one-time run bot task, the bot runs at certain intervals (such as once every couple days), and stops tagging pages when the respective cleanup category has a set maximum number of pages tagged (such as 75–100 pages)? This will keep the backlog manageable, but still keep bringing the pages with bare ref url issues to light. Steel1943 (talk) 14:54, 23 April 2019 (UTC)

I believe GreenC could do a fast scan (a little bit off-topic, but could that awk solution work with .bz2 files?). For the lvwiki scan, I use such a regex (more or less the same conditions as OP asked for) which works pretty well: <ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>. For actually fixing those URLs, we can use this tool. Can be used both manually and with a bot (it has a pretty nice API). --Edgars2007 (talk/contribs) 15:36, 23 April 2019 (UTC)
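For anyone following along, Edgars2007's pattern translates directly to Python. Here it is run against a few sample refs (the sample wikitext is mine):

```python
import re

# Edgars2007's regex, verbatim (Python does not need the escaped
# slashes, but they are harmless).
BARE_REF = re.compile(r'<ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>')

wikitext = (
    'A.<ref>{{cite web |url=https://example.org/a |title=A}}</ref> '
    'B.<ref>https://example.org/b</ref> '
    'C.<ref>[https://example.org/c]</ref>'
)
print(BARE_REF.findall(wikitext))
# ['https://example.org/b', 'https://example.org/c']
```

Note that bracketed links with a label, like `<ref>[https://example.org/c Some title]</ref>`, are deliberately not matched: they already have a title, so they are not "bare" in the sense this request targets.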

I recently made a bot that looks for articles that need {{unreferenced}} and this is basically the same thing other than a change to the core regex statement, which User:Edgars2007 just helpfully provided. So this could be up and running quickly. It runs on Toolforge and uses the API to download each of 5.5M articles sequentially. The only question is which method: > 50%, or max size of the tracking category, or maybe both (anything over 50% is exempted from the category max size). The mixed method has the advantage of filling up the category with the worst cases foremost and lesser cases will only make it there once the worst cases are fixed. -- GreenC 17:51, 23 April 2019 (UTC)

Fine as far as it goes. I am still concerned about the small band of editors that mend such bare links, being potentially swamped by the sheer number of cases unearthed. Let the opening of the can of worms begin ! - Derek R Bullamore (talk) 19:59, 24 April 2019 (UTC)
Derek R Bullamore, I thought about this some more and think it would be easiest, at least initially, to limit the category to some number of entries (1000) and run weekly while working its way through the 5.5m articles, i.e. if the first article doesn't have a bare link when it is checked, it won't be checked again until all the others have been checked. If the category is being cleared by editors rapidly it can always be adjusted to run more frequently. -- GreenC 21:29, 24 April 2019 (UTC)
GreenC From my experience 1000 is still a massive number of articles and would take much more than a week to clean up. The situation would quickly become like the example I gave above. Don't forget other editors will still be adding bare url tags to articles so the number will exceed 1000 per 7 days. As far as I know there are only two or three editors who check and work on these regularly. We appreciate having a few days where there are only one or two in the category so we can focus on other editing. I can see a situation where we burn out and abandon the work completely. I would suggest a smaller number like 200 at most. Another possibility is to run 1000 but then not do another run until those have been finished. I probably should have mentioned this earlier but there is a wide range of problems to fix with the bare urls - some are easy and take a few seconds. Others are labor intensive and can require days to finish. For example I am currently working on Ordinance (India). Neither reFill nor reflinks could format these and I am having to do them one at a time. Now these are just my thoughts and others may feel differently. MarnetteD|Talk 21:52, 24 April 2019 (UTC)

MarnetteD, yes understand what you are saying. Was thinking, what about an 'on demand' system where you can specify when to add more, and how many to add - and it only works if the category is mostly empty, and maxes at 200 (or less). This is more technically challenging as it would require some kind of basic authentication to prevent abuse, but I have an idea how to do it. It would be done all on-Wiki similar to a bot stop page. This gives participants the freedom to fill the queue whenever they are ready, and it could keep a log page. Would that be useful? -- GreenC 19:19, 25 April 2019 (UTC)

That sounds good GreenC. It sure seems to address my concerns. If other editors are adding a batch of bare url tags the bot wouldn't be piling up more on top of those. Thanks for the suggestion. MarnetteD|Talk 20:17, 25 April 2019 (UTC)
Yeah, it looks a good idea - the best so far - particularly if it can be made to operate successfully. Bring it on. - Derek R Bullamore (talk) 22:08, 25 April 2019 (UTC)
Ok this will be new code and I'm finishing some other projects. Will be in touch. -- GreenC 22:52, 26 April 2019 (UTC)
  • @GreenC: Would you be able to send me a ping when this proposed idea has come to fruition? I am not really following this discussion (other than the fact my original proposal was shot down), so I'm just curious how this will look when complete and running. Steel1943 (talk) 21:45, 4 May 2019 (UTC)
@Steel1943: yes no problem. -- GreenC 23:15, 4 May 2019 (UTC)

Y Done -- GreenC 04:56, 25 June 2019 (UTC)

@Steel1943: See ^. If you're OK with this, let me know and I'll archive this thread. Headbomb {t · c · p · b} 08:46, 27 June 2019 (UTC)