Wikipedia:Bot requests/Archive 28



Many, many case citations link to case citation. This was probably done by a bot, and is completely useless, as case citation itself says to hit back and look on the previous page. If these can be linked to the actual cases, great, but otherwise please get rid of these useless, misleading links. I have talked to numerous people who absolutely hate these links and want them gone. However, I think they were put here by a bot, and only really a bot can remove them all. Scientus (talk) 07:58, 12 May 2009 (UTC)

Yeah, those really are annoying. Of course we don't want to remove all links to case citation; just those that pipe the link, where the shown string starts with a number. Let me look at this.  Doing...Quadell (talk) 12:35, 12 May 2009 (UTC)

Is this the sort of change you're hoping for? – Quadell (talk) 12:59, 12 May 2009 (UTC)
Yes. Scientus (talk) 00:15, 13 May 2009 (UTC)
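The substitution Quadell describes (unpiping links to case citation whose displayed text begins with a digit) could be sketched as a small Python helper. The regex and function here are an illustrative assumption, not the actual AWB rule used:

```python
import re

# Match piped links to "case citation" whose displayed text starts with a
# digit, e.g. [[case citation|5 U.S. 137]]; capture only the display text.
PIPED_CITE = re.compile(r"\[\[[Cc]ase citation\|(\d[^\]|]*)\]\]")

def unlink_case_citations(wikitext: str) -> str:
    """Replace matching piped links with their plain display text.

    Unpiped [[case citation]] links and piped links whose display text
    does not start with a digit are deliberately left alone.
    """
    return PIPED_CITE.sub(r"\1", wikitext)
```

A run over article text would then leave prose like `''Marbury'', 5 U.S. 137 (1803)` unlinked while keeping any intentional links to the case citation article.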

Considering that our audience is mainly laypersons who may not be familiar with the formatting for legal citation, shouldn't at least one of the citations in an article (perhaps the first one) include the link? Jim Simmons (talk) 15:30, 12 May 2009 (UTC)

I'm not sure that the link will help a layperson to understand what it means. – Quadell (talk) 15:43, 12 May 2009 (UTC)
A link to the opinion is better, which is what we used to have. Could a bot switch to an external link to Findlaw or Justia instead of just removing the internal one? The page and volume numbers would just need to be plugged into the URL, as we do at Template:ussc.--chaser (talk) 15:10, 13 May 2009 (UTC)
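As a sketch of chaser's idea, the volume and page numbers could be slotted into a reporter-specific URL pattern. The Justia path below is an assumption modeled on the volume/page substitution in {{ussc}}, not a verified endpoint:

```python
def scotus_external_url(volume: int, page: int) -> str:
    """Build an external link target for a U.S. Reports citation.

    The URL pattern is hypothetical, modeled on how {{ussc}} plugs the
    volume and page numbers into a link to the full opinion.
    """
    return f"https://supreme.justia.com/cases/federal/us/{volume}/{page}/"
```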

I left a note at User talk:Quadell about this. I'd like to see more discussion about a large-scale change like this. At least something more concrete than, "I have talked to numerous people who absolutely hate these links and want them gone." ;-) --MZMcBride (talk) 21:28, 12 May 2009 (UTC)

Sure. To me, it would be like linking all Arabic names who don't otherwise have articles to Arabic name, a la Mohammed bin Abdullah. It looks like a link to something specific, but turns out to be a general (and generally unhelpful) definition of a term you probably weren't looking for. – Quadell (talk) 21:56, 12 May 2009 (UTC)
MZMcBride mentioned on my talkpage that it might be best to have a discussion at VP or something. I'm open, but I don't feel like setting it up. I'm putting the AWB run on hold, for now... but I'd be grateful if an interested party set up a central discussion somewhere. – Quadell (talk) 22:12, 12 May 2009 (UTC)
Just wanted to say, really good suggestion! It is a little redundant to link all citations to an article on case citations: this is an example of how I write case law pages: Carlill v Carbolic - I link the citation to the actual report on Bailii, which provides the full text of a judgment. For other Commonwealth cases there are similar websites, and for the American cases people often use findlaw.com or something similar. I often also have many queries about whether case pages are properly cited, per WP:CITE, and have to explain that the report citation is the reference. Perhaps someone would like to raise this for a policy change on WP:CITE? But well done for picking this glitch up! Wikidea 00:11, 13 May 2009 (UTC)

The reason I see for why they might have been put there in the first place is to boost Wikipedia's already crazy-high SEO: having specific links that don't point to the context of their name misleads not only people but search engines. Scientus (talk) 00:15, 13 May 2009 (UTC)

I've never liked having the first link in an article point outside of Wikipedia, which is what you have with external links to the full opinion in the article lead. A better practice might be to link to the article on the specific case reporter, which would go a longer way towards explaining the citation than the general case citation article. At this point we should have articles on all the main American ones, such as United States Reports, Federal Reporter, Federal Supplement...even the regional reporters for states, such as Pacific Reporter. Postdlf (talk) 15:39, 13 May 2009 (UTC)

Yeah, I like this better than my external link proposal above.--chaser (talk) 01:55, 15 May 2009 (UTC)
That's not very good. Computers are much better at that; if you have to have a special on-Wikipedia page, then make it more like Special:BookSources, where it actually finds the information you are looking for. Otherwise it should be removed, as googling it would be much quicker and less confusing to people. In light of the OpenMaps thing, maybe Wikimedia could just host this stuff locally with public.resource.org as the collector. Scientus (talk) 17:22, 17 May 2009 (UTC)

The reports were updated with data from the March dump. For the first time we have the full results on the various scans being run. For some we already have identified how these could be fixed by bot (e.g. by AWB or pywikipediabot), for others this still needs to be done. I'd appreciate your help. -- User:Docu

This BOTREQ went unaddressed.

[1]

It still needs to be done, so... I don't know what's the procedure for dealing with that... Help? Headbomb {ταλκκοντριβς – WP Physics} 03:31, 14 May 2009 (UTC)

Not really any "procedure", just unarchive or repost the request. Sometimes the right person doesn't see it, or sometimes the right person is just too busy at the moment to take it on. If I can manage to not fall into the latter category any time soon (and no one beats me to it), I'll take a look into your request myself. Anomie 11:37, 14 May 2009 (UTC)
I started looking at it, and I see a few potential issues. Consider Wikipedia:Selected anniversaries/May 11 from 2008: it incorrectly fails to include the link for Mother's Day, since it depends on {{CURRENTYEAR}}. For the specific case of {{CURRENTYEAR}} I can easily enough take care of it, but if any use is hidden inside a transcluded template it'll be trouble. Similarly, if any old revisions refer to templates that have been changed it'll be difficult to take that into account, and if any of those templates were since deleted I'd have no chance.
I may end up doing a trial run just to dump all templates used for each date since 2004-02-26 (the first date the anniversary page existed for), so I can see just how much of a problem it might be. Anomie 03:27, 15 May 2009 (UTC)
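The template-dump step Anomie mentions amounts to scanning each old revision's wikitext for transclusions. A minimal sketch (top-level invocations only; nested transclusions and parser functions would need real parsing):

```python
import re

# Capture a template name: "{{", optional whitespace, the name, then
# either a "|" (parameters follow) or the closing "}}".
TEMPLATE = re.compile(r"\{\{\s*([^{}|]+?)\s*(?:\||\}\})")

def templates_used(wikitext: str) -> list:
    """Sorted, de-duplicated names of templates invoked in the wikitext."""
    return sorted(set(TEMPLATE.findall(wikitext)))
```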
BRFA filed Anomie 21:08, 17 May 2009 (UTC)

Bot to analyze history to determine who posted unsigned posts

Is it possible for a bot to go through the history?

For old archives that have a lot of unsigned posts, it would be nice to have a bot that would go through history and figure out where to use the {{unsigned}} to sign the unsigned posts with name (or IP) and date/time.

Thanks,

--stmrlbs|talk 05:47, 18 May 2009 (UTC)
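The core of such a bot would be walking the page's revision list oldest-first and attributing each unsigned post to the first revision whose text contains it. A minimal sketch of that step, assuming the revision data has already been fetched from the API:

```python
def attribute_unsigned(revisions, post_text):
    """Work out who added an unsigned post.

    revisions: oldest-first list of (username, timestamp, full_page_text).
    Returns an {{subst:unsigned}} tag for the first revision containing
    the post, or None if the post was never found.
    """
    for user, timestamp, text in revisions:
        if post_text in text:
            return "{{subst:unsigned|%s|%s}}" % (user, timestamp)
    return None
```

Posts that were later refactored or reworded would not match exactly, so a real bot would probably need fuzzier matching and human review.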

SineBot has been running since 2007. For anything older than that, I'm not sure I see the point, as the usefulness of archives that old is pretty questionable. Mr.Z-man 06:02, 18 May 2009 (UTC)
I just went through and organized an archive - only one, but a fairly large one, that went back to one entry in 2002. Up until 2007, there were a bunch of unsigned posts, some out of order. But, by going through the history, I was able to figure out who did what. From this, I was able to get the archive in a better chronological order. Which helped make it easier to read.
For any article that isn't really controversial, or doesn't have a whole lot of traffic, you might have an archive covering a long period of time. Having a bot to go through that would save anyone trying to make sense of past discussions a lot of time. But I realize, as you say, that it wouldn't make sense to run this bot all the time. It would just be a bot that a person could request for an archive that they wanted to read that contained discussions prior to SineBot.
--stmrlbs|talk 07:03, 18 May 2009 (UTC)

Mass page creation with specific content

Is there a bot that can take a list of titles, check to see which ones aren't already occupied, and then create pages for them with identical prespecified code? That would be really useful for mass de-redlinking of various types of prehistoric animals. Sorry if this is a stupid question, I have no idea how the whole bot thing works here on Wikipedia. Abyssal (talk) 17:14, 19 May 2009 (UTC)

Yes, but does the prespecified code contain all the requisite information for the article or is it generic and needing to be tweaked by a human hand? –xeno talk 17:18, 19 May 2009 (UTC)
It would be a fully functional stub, with the use of the article name in the text being supplied by a template. So, if the subst:pagename template works with bot-created articles it should require no human tweaking whatsoever. Abyssal (talk) 17:29, 19 May 2009 (UTC)
Mass de-redlinking is a pretty bad idea, and the creation of huge numbers of identical pages is also a pretty bad idea. If no one has the inclination to write an article, there's no need to create a stub.—Kww(talk) 17:21, 19 May 2009 (UTC)
Our project goal, the distribution of the sum of all human knowledge, is reason enough to create these articles, irrespective of whether any current members feel like creating them manually. Abyssal (talk) 17:29, 19 May 2009 (UTC)
Bots have been used for this sort of thing in the past & there is certainly nothing wrong with it if the task/text created is well thought out. --ThaddeusB (talk) 17:54, 19 May 2009 (UTC)
If we were talking populating stubs based on a database, I could begin to see it. Then the conversation would center around whether the database contained sufficient data to create decent stubs from, and whether there was a potential for merging multiple external databases to use as a foundation. But to just create innumerable identical stubs? That just serves to disguise the redlink.—Kww(talk) 17:59, 19 May 2009 (UTC)
Well yes, certainly the created pages need some actual content to justify their existence. --ThaddeusB (talk) 18:03, 19 May 2009 (UTC)
How much content do you guys feel needs to be present to justify the creation of the stub? I was planning on having a complete taxobox, the geologic period in which it lived, a mention that it was a member of its group and is extinct, a brief description of what members of that group were like, a see also section linking to the list of members of its group, links to the relevant portals, and references. Abyssal (talk) 18:12, 19 May 2009 (UTC)
Is Acanthonautilus good enough, assuming I get references in there? Abyssal (talk) 18:40, 19 May 2009 (UTC)
To me, a stub is better than nothing, and this is a fairly good stub. So I would be supportive of creating stubs like this (with a reference/source disclosed, of course). One thing you might be able to add is some examples of species found in the genus - without knowing your source it is hard to say. If you want, I am willing to help you try & expand the amount of info in each stub and/or find ways to make them a little more varied. --ThaddeusB (talk) 22:50, 19 May 2009 (UTC)
Listing species, though, would compromise their compatibility with mass creation, since obviously not every genus would have species of the same name. That kind of information can be added with time whenever someone gets around to expanding them. Any ideas you can come up with would be greatly appreciated. <3 Obviously we should try to polish the make-up of the stub as much as possible before trying to generate them. Abyssal (talk) 00:56, 20 May 2009 (UTC)
(edit conflict) From a technical standpoint, it's something that would be extremely easy for a bot to do. A bot could even handle something like "Here is a CSV file of prehistoric animals with their common names, scientific names, and a bunch of other infobox parameters. Create the articles at the scientific names with redirects from the common names (or vice versa), logging any redirects that couldn't be created for human attention." easily enough. But there would need to be a strong community consensus for the articles to actually be created, especially if they are going to be basically identical stubs.
Another approach (less likely to meet with opposition) is to have the bot create the pages as subpages of the appropriate WikiProject, and then humans can move them to article space once they have been expanded sufficiently. Anomie 18:04, 19 May 2009 (UTC)
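The CSV-driven variant Anomie describes could be sketched like this. The column names, the stub wording, and the red-link list are all illustrative assumptions, not an agreed format:

```python
import csv
import io

# Hypothetical boilerplate; {name} and {period} come from CSV columns.
STUB_TEMPLATE = (
    "'''{name}''' is an extinct genus of prehistoric animal "
    "known from the {period}."
)

def build_stubs(csv_text, existing_titles):
    """Return {title: article_text} for every CSV row whose title
    is currently a red link (i.e. not in existing_titles)."""
    existing = set(existing_titles)
    stubs = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["name"] not in existing:
            stubs[row["name"]] = STUB_TEMPLATE.format(**row)
    return stubs
```

A real run would pass the resulting texts to the creation step (or to WikiProject subpages, per the second approach), logging any rows skipped because the title was already occupied.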

Oh, and while we're on the subject, could the bot used for this project, or another bot, be used to add project headers to the talk pages of the created articles? Abyssal (talk) 18:02, 19 May 2009 (UTC)

Newly listed GA articles

Would it be a good idea for us to have a bot take care of the mundane task of updating Wikipedia:Good articles/recent with the titles of recently passed GA noms? This seems like a simple task, and we already have bots to update the articlehistory and {{GA number}}. --ErgoSum88 (talk) 14:50, 10 May 2009 (UTC)

Hm, I'll have a play around with this one, but if anyone has any good ideas, let me know. - Jarry1250 (t, c) 16:29, 10 May 2009 (UTC)
Okay, nearly done (I'll file for it today at least). It'll grab new additions to WP:GA and then post. How often would you like it updated? - Jarry1250 (t, c) 16:52, 10 May 2009 (UTC)
Well, as often as possible would be good, but I suppose a few times a day would be nice. Thank you so much for this, I'm surprised no one has thought of this before! I've been getting nagged for not updating the "recent" list (I always forget), so now I don't have to worry about it! Awesome. Thanks again. --ErgoSum|talk|trib 17:27, 10 May 2009 (UTC)
I just noticed another issue. Adding newly passed articles to WP:GA is another human-editing task. Is there any way the bot can, let's say, scan for articles which have recently been assessed as GA-class, and then automatically add the article to both WP:GA and WP:Good articles/recent? It would have to take the category from the GA template in order to know which section to add the article to. --ErgoSum|talk|trib 17:47, 10 May 2009 (UTC)
Well, I don't know about that (this one's WP:GA -> recent) but I've filed a BRFA for your original request. - Jarry1250 (t, c) 18:37, 10 May 2009 (UTC)
Ok, well I appreciate your fast response. I suppose I can post another, separate request for the other issue? As long as the recent articles issue can be handled by a bot, that takes care of half the problem. --ErgoSum|talk|trib 22:24, 10 May 2009 (UTC)
I don't think it's possible for the bot to automatically add articles to WP:GA. A human is required to place the article into appropriate topic. Bot auto-updating WP:Good articles/recent sounds good, but the turnaround rate of GA might cause the bot to change that page too often? OhanaUnitedTalk page 15:40, 13 May 2009 (UTC)
If the proper subtopic tag were placed within the GA template, then all a bot would have to do is use this tag to place the article in the appropriate topic category. The only change necessary would be actually requiring editors to place a subtopic tag on the talk page before passing the GA nom. As for the other issue of changing the recent articles page too often, is that really a problem? The turnaround rate for GAs is not that large; a quick check of the page history shows that recent new additions max out at 15-20 per day, and most days see far fewer. If the rate increases beyond that, then I would suggest changing the recent listing to include more than 15 articles, which would solve the problem of recent articles being pushed off the page before they have a chance to be seen. --ErgoSumtalktrib 14:24, 16 May 2009 (UTC)
Speaking from experience, you'll be surprised to see the turnaround rate spikes whenever a backlog elimination drive takes place. OhanaUnitedTalk page 23:46, 19 May 2009 (UTC)
Good point, but a conversation about this subject is taking place over at Wikipedia talk:Good articles#/recent bot: an update, and we should probably continue this debate there. --ErgoSumtalktrib 02:02, 21 May 2009 (UTC)
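The task Jarry's bot performs boils down to diffing the set of linked titles on WP:GA between runs. A sketch of that core step (a simplification: it treats every wikilink on the page as a listing):

```python
import re

# Capture a link target up to any "|", "#", or "]]".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def newly_listed(old_text, new_text):
    """Titles linked in the new revision of the listing page but not
    the old one, in order of first appearance."""
    old = set(WIKILINK.findall(old_text))
    seen, added = set(), []
    for title in WIKILINK.findall(new_text):
        if title not in old and title not in seen:
            seen.add(title)
            added.append(title)
    return added
```

During a backlog drive the returned list would simply be longer; the bot's posting step would be unchanged.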

WP Films request (tags and redirects)

Is it possible for a bot to generate a list of all articles which are both

  1. transcluding {{Film}} (or any redirects of this template) in their talk space, and also
  2. redirects in their article space?

In other words, can a bot create a list of all redirects which are also tagged in their Talk space with a WP Films project banner?

Many thanks in advance, Girolamo Savonarola (talk) 01:03, 16 May 2009 (UTC)
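Once the two title lists are pulled (mainspace redirects, and pages whose talk page transcludes {{Film}}), the report itself is just a set intersection. A sketch, assuming the talk titles have already had their "Talk:" prefix stripped:

```python
def film_tagged_redirects(redirect_titles, film_talk_titles):
    """Titles that are mainspace redirects AND whose talk page carries
    the WP Films banner, sorted for a stable report."""
    return sorted(set(redirect_titles) & set(film_talk_titles))
```

On the toolserver the two inputs would come from database queries (the redirect table and the templatelinks table); this only shows the combining step.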

Yes, it's certainly possible using the m:toolserver. I'll have a play around with it and get back to you. - Jarry1250 (t, c) 11:07, 16 May 2009 (UTC)
 Done This should be the list you're after. If you want, I can turn it into a report that updates itself every time you refresh the page. - Jarry1250 (t, c) 11:49, 16 May 2009 (UTC)
Many thanks! Yes, would it be possible to have the updating report? Most gratefully, Girolamo Savonarola (talk) 16:39, 21 May 2009 (UTC)
Sure thing. Working on it. - Jarry1250 (t, c) 16:44, 21 May 2009 (UTC)
Sorry, I had to eat (apparently). It's at [2], though it is possible that there are bugs. - Jarry1250 (t, c) 18:01, 21 May 2009 (UTC)

Clerk to fiscal officer

All Ohio townships have a section (identical for virtually all townships statewide) detailing their form of government; see the Government section of Madison Township, Richland County, Ohio for an example. You'll see that "...There is also an elected township clerk, who serves a four-year term...Vacancies in the clerkship or on the board of trustees..." is part of this text. The state legislature recently changed the title of this type of official from "clerk" to "fiscal officer"; could someone write a bot to convert "clerk" to "fiscal officer" in these exact strings of text? A few townships have already been updated, but there are well over 1,000 that haven't been updated yet. Nyttend (talk) 01:08, 20 May 2009 (UTC)

Wait, should "clerkship" be changed to "fiscal officership"? – Quadell (talk) 01:38, 20 May 2009 (UTC)
That's what has been done in the few that have been changed. If you think that there's a better way to change it, please do, but it would be helpful if you could also run something to change "fiscal officership" to the term that you're making it. Nyttend (talk) 18:29, 20 May 2009 (UTC)
Also, if it's not too hard, could you add the following reference to these sections — <ref>[http://codes.ohio.gov/orc/503.24 §503.24], [http://codes.ohio.gov/orc/505.01 §505.01], and [http://codes.ohio.gov/orc/507.01 §507.01] of the [[Ohio Revised Code]]. Accessed [[2009-04-30]].</ref>? The only township that I know of that has this section cited is Blendon Township, Franklin County, Ohio. Nyttend (talk) 18:31, 20 May 2009 (UTC)
 DoneQuadell (talk) 16:58, 21 May 2009 (UTC)
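A replacement restricted to the exact boilerplate strings, so that unrelated uses of "clerk" elsewhere in an article stay untouched, could be sketched like this (a hypothetical sketch; the actual AWB settings used aren't shown in the thread, and the reference-insertion step is omitted):

```python
# Only these exact boilerplate phrases are touched, so text such as
# "county clerk" elsewhere in an article is left alone.
REPLACEMENTS = [
    ("elected township clerk", "elected township fiscal officer"),
    ("Vacancies in the clerkship", "Vacancies in the fiscal officership"),
]

def update_township_government(text):
    """Apply the clerk -> fiscal officer renaming to the standard
    Ohio township government boilerplate."""
    for old, new in REPLACEMENTS:
        text = text.replace(old, new)
    return text
```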

Possible for bot to notify creator and contributors of AfD?

Can a bot notify the creator of an article, and contributors with at least x edits to it, of an AfD? This would be done by examining the Wikipedia edit history of the page (or the two webpages below):

If this is not possible, can a bot be made to notify the creator of an article of an AfD?

For example, the first editor on a page is found here:

http://en.wikipedia.org/w/index.php?title=Wikipedia:Bot requests&dir=prev&action=history&limit=1

The list of today's afd's is found here: WP:AFDT

Ikip (talk) 00:14, 3 May 2009 (UTC)
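The history URL above has an API equivalent: request revisions oldest-first with a limit of one. A sketch of just the query parameters (the HTTP fetch and JSON parsing are omitted):

```python
def first_revision_params(title):
    """MediaWiki API parameters for a page's earliest revision,
    i.e. the edit that created the article."""
    return {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvdir": "newer",   # oldest revision first
        "rvlimit": 1,
        "rvprop": "user|timestamp",
        "format": "json",
    }
```

The returned revision's `user` field is the article's creator, who would then get the AfD notice.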

I think this used to be done by a bot, but it was stopped due to complaints. Xclamation point 00:46, 3 May 2009 (UTC)
Do you have any other details? I can't find any bot resembling this at Wikipedia:Bots/Status. thanks. Ikip (talk) 05:43, 3 May 2009 (UTC)
I found it: User:Jayden54Bot, Wikipedia:Bots/Requests for approval/Jayden54Bot, User:Jayden54Bot/AFDNotify, opt-out: User:Jayden54Bot/ignore.js. It was disabled because the editor wanted to take a long wikivacation. I am not aware of any complaints about this bot which shut it down. I emailed the editor about this; hopefully he can give me the coding. Ikip (talk) 06:28, 3 May 2009 (UTC)
It appears the editor doesn't want to give out the code for this (after repeated requests by other editors, he never gave it out).[3]
So would someone else be able to make this? Ikip (talk) 07:50, 3 May 2009 (UTC)
I'll try. Coding... Xclamation point 20:40, 3 May 2009 (UTC)
Thank you, thank you, thank you. I wish editors would simply publish their bot info. Ikip (talk) 00:21, 4 May 2009 (UTC)
(Bump) the editor was unable to do this request, can someone else? Ikip (talk) 18:11, 12 May 2009 (UTC)

Is this related to Wikipedia:Bots/Requests for approval/CSDCheckBot? – Quadell (talk) 12:57, 13 May 2009 (UTC)

No, never heard of it. That bot:
Notifies editors who have tagged a page for speedy deletion that was not deleted or was deleted for criteria differing from that for which they tagged it.
"Notifies editors who have tagged a page for speedy deletion", not the editors who created the page in the first place. could you make this quadell please? Ikip (talk) 08:10, 14 May 2009 (UTC)
If no-one else is working on this I might be able to in the near future. I've actually been running a similar bot on the Dutch Wikipedia for a few years now. Ikip, feel free to bug me if I forget about this. (Note: If anyone else wants to do this then be my guest. Please leave a message at my talk page though.) --Erwin (talk) 20:38, 14 May 2009 (UTC)
Equally, I might have a look at it if there's been no progress in, say, three weeks' time, when my exams finish. [[Sam Korn]] (smoddy) 14:47, 15 May 2009 (UTC)
I checked the Editor's index, found this:
  • User:BJBot – informs major/recent editors of an article that it has been submitted to AfD process
-- John Broughton (♫♫) 01:21, 16 May 2009 (UTC)
Thank you all! I will look into the existing User:BJBot. John, I see where you found this: Wikipedia:Editor's_index_to_Wikipedia. Ikip (talk) 19:59, 18 May 2009 (UTC)
Hmm. The BJBot page doesn't seem to list this task currently. This was added to the editor's index in Jan 2008.[4]
This notification feature was removed from the BJbot page in March 2008.[5] Ikip (talk) 20:23, 18 May 2009 (UTC)

I've started work on this. --Erwin (talk) 21:36, 21 May 2009 (UTC)

How can I check if a user has already been notified of the AfD? For example, on the Dutch Wikipedia I check for a link to the page and for some notification templates. I'm not that familiar with this project though. So are there any templates you can use without substitution to notify users? --Erwin (talk) 22:07, 21 May 2009 (UTC)
The two standard templates used to warn people of AFD's are {{AFDNote}} and {{AFDWarningNew}}, but like all talk page messages they should both be substituted and both contain a link to the AFD discussion, so checking for that link should be all that's necessary.--Dycedarg ж 00:56, 22 May 2009 (UTC)
Thanks. --Erwin (talk) 11:05, 22 May 2009 (UTC)
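Dycedarg's link check could be sketched as a simple wikitext search on the user's talk page; the regex here is an illustrative assumption:

```python
import re

def already_notified(talk_text, afd_title):
    """True if the talk page already links to the AfD discussion,
    whether the link is piped, plain, or points at a section."""
    pattern = r"\[\[\s*" + re.escape(afd_title) + r"\s*[|\]#]"
    return re.search(pattern, talk_text) is not None
```

Since both {{AFDNote}} and {{AFDWarningNew}} are substituted, the link left behind by either template is what this check would find.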

The bot's up for approval at Wikipedia:Bots/Requests for approval/Erwin85Bot 8. --Erwin (talk) 11:05, 22 May 2009 (UTC)

Prince Edward Island on French Wikipedia

Request of 14 May 2009, by: — Sniff (talk) 21:42, 14 May 2009 (UTC)

Request :
List all pages of your category Prince Edward Island, including its subcategories, and show the corresponding French articles alongside them (cf. the example). Also, count the number of English articles and French articles. If there are any questions, feel free to ask. Thanks!

This is to help translate your articles into French (or just to prepare a list for translators).

Example

[6]

I'm guessing that adding ":fr" to the front of the links that (I think) are in the French language Wikipedia will make this request make more sense, and have done so. I also note that the words "English" are links here, though gray; that's somewhat disconcerting. -- John Broughton (♫♫) 01:14, 16 May 2009 (UTC)

This may be an unusual request, but I think it's a good idea. By accomplishing this feat, it would be possible to make Wikipedia more complete. If you think it is impractical for now, leave me a message and remove this request. Thanks, --Sniff (talk) 20:55, 22 May 2009 (UTC)

lojban bot

I'd like to request a bot for the Lojban wikipedia (English article on Lojban). The bot ought to do one or more of the following tasks:

  • Search for words ending in a consonant within the titles of newly created articles and make some redirects and moves. For example, if the article "djeordj uacinton" is created, do the following:
    • Move the article to "djeordj. uacinton." (with a "." after each word ending with a consonant)
    • Create the redirects "djeordj. uacinton", "djeordj uacinton." and "djeordj uacinton" (with and without the period)
  • make all paragraphs in the articles start with "ni'o"
  • make section titles start with "ni'oni'oni'o..." according to the level of the section, for example:
    • If the lowest section level is "====lowest====", make all section titles with that level (====) start with "ni'oni'o", those with (===) start with "ni'oni'oni'o", those with (==) start with "ni'oni'oni'oni'o", and so on. For example:
      • For the following entry:
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 

==third lowest level heading==
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 
===second lowest level heading===
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 

====lowest level heading====
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 
==third lowest level heading==
text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 

      • make the bot edit it into
ni'o text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text
==ni'oni'oni'oni'o third lowest level heading==
ni'o text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 
===ni'oni'oni'o second lowest level heading===
ni'o text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 

====ni'oni'o lowest level heading====
ni'o text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text
==ni'oni'oni'oni'o third lowest level heading==
ni'o text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text text 

I know it may not be the most appropriate thing to ask here for a bot for another Wikipedia, but I didn't find anywhere else to do so. --Homo logos (talk) 15:57, 23 May 2009 (UTC)
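The two text transformations requested above could be sketched in Python as follows. The consonant test treats a-u and y as Lojban vowels, which is an assumption the Lojban editors should confirm, and the heading-depth rule follows the worked example (the lowest level gets two "ni'o", each level up one more):

```python
import re

HEADING = re.compile(r"^(=+)\s*(.*?)\s*\1\s*$")
VOWELS = set("aeiouy")

def dot_title(title):
    """Append "." to each word ending in a consonant, so that
    "djeordj uacinton" becomes "djeordj. uacinton."."""
    return " ".join(w if w[-1] in VOWELS else w + "." for w in title.split())

def add_nio(text):
    """Prefix paragraphs with "ni'o" and heading titles with "ni'o"
    repeated according to depth, as in the example above."""
    lines = text.split("\n")
    levels = [len(m.group(1)) for line in lines if (m := HEADING.match(line))]
    lowest = max(levels, default=0)
    out, new_para = [], True
    for line in lines:
        m = HEADING.match(line)
        if m:
            eqs, title = m.group(1), m.group(2)
            reps = lowest - len(eqs) + 2  # lowest level -> 2 repetitions
            out.append(eqs + "ni'o" * reps + " " + title + eqs)
            new_para = True  # text after a heading starts a new paragraph
        elif not line.strip():
            out.append(line)
            new_para = True
        else:
            out.append("ni'o " + line if new_para and not line.startswith("ni'o") else line)
            new_para = False
    return "\n".join(out)
```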

I know only a little about Lojban, and I would feel uncomfortable making such large-scale changes to a 'pedia in a language I don't speak without pretty good evidence of consensus. Is this change discussed anywhere (in English)? If not, could you start a discussion somewhere (on Meta, maybe?) for other Lojban enthusiasts to weigh in? – Quadell (talk) 16:05, 23 May 2009 (UTC)
No such consensus exists to my knowledge. However, I think the best place to start such a discussion would be the Lojban mailing list. I'll start it and post the link here for you to read as soon as I feel some consensus has been reached. --Homo logos (talk) 16:38, 23 May 2009 (UTC)
Thanks! – Quadell (talk) 18:20, 23 May 2009 (UTC)

Tagging uncategorized pages

I propose a task for finding pages without any categories and tagging them with {{subst:dated|uncategorized}}. Beagel (talk) 19:14, 23 May 2009 (UTC)
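The per-page check such a bot would run could be sketched like this. It assumes the subst'd tag ultimately expands to an {{uncategorized|...}} call, which is what the bot should look for to avoid double-tagging:

```python
import re

CATEGORY_LINK = re.compile(r"\[\[\s*Category\s*:", re.IGNORECASE)

def needs_uncat_tag(wikitext):
    """True if the page has no category links and is not already tagged
    (assuming the subst'd tag expands to {{uncategorized|date=...}})."""
    return (not CATEGORY_LINK.search(wikitext)
            and "{{uncategorized" not in wikitext.lower())
```

A live run would additionally skip redirects, disambiguation pages, and pages categorized only via templates, which this simple text check cannot see.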

Like this? - Jarry1250 (t, c) 19:23, 23 May 2009 (UTC)
Exactly. Somehow never seen it operating. Beagel (talk) 19:47, 23 May 2009 (UTC)
I could code this. --Chris 13:27, 24 May 2009 (UTC)
There are a number of bots that have been approved for this, but I am not sure if any are active. You might be interested in this Untagged Uncategorized Articles toolserver report. It is a "live" report vs. Special:UncategorizedPages. -- JLaTondre (talk) 13:49, 24 May 2009 (UTC)
If you want I can run a bot like this. I'm running one at the Dutch Wikipedia, so I'd just need to exclude "inuse" from my database query and apply for approval. You would need to explain to me what is meant by inuse though. --Erwin (talk) 20:24, 24 May 2009 (UTC)
BRFA filed at Wikipedia:Bots/Requests for approval/Erik9bot 8. Erik9 (talk) 22:00, 25 May 2009 (UTC)

Bot to repair tables created before Aug 2008, for Accessibility

In Bugzilla:18829 we learn that

"Note: As of August 20, 2008 new tables created by using the Wikipedia table button include border="1" and so they do not have this problem."

however, many tables created prior to that date lack border="1" and will be barely readable when cut and pasted outside Wikipedia, or for text browser users.

By writing and deploying this bot, you will help Wikipedia better fulfil the Wikipedia:Accessibility#Users_with_limited_CSS.2FJavaScript_support goals described at Wikipedia:Accessibility#Tables.

The tables in question, those with class wikitable, are all expected to have borders, as that is what the stylesheet specifies. So all the author needs to do is check for the class wikitable, then check whether there already is a border=... parameter, and only if not, go ahead and add border="1". (There is no need to also check for "before Aug 2008": that would not be an exact check anyway, as people might have cloned the bad tables from elsewhere later.)

Also, tables without class wikitable do not have the problem (they look the same with or without borders, stylesheets or not), so their border= choices should be respected and not tampered with.

Jidanni (talk) 14:13, 18 May 2009 (UTC)
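The check Jidanni describes is a straightforward text transform over each table-start line. A rough sketch in Python (the regex and function name are illustrative, not taken from any actual bot):

```python
import re

# A wiki table starts on a line beginning with "{|", followed by its
# HTML-style attributes on the same line.
TABLE_START = re.compile(r'^(\{\|[^\n]*)$', re.MULTILINE)

def add_border(wikitext):
    """Add border="1" to wikitable table-start lines that lack a border=
    attribute; leave all other tables untouched, as proposed above."""
    def fix(match):
        line = match.group(1)
        if 'wikitable' in line and 'border=' not in line:
            return line + ' border="1"'
        return line
    return TABLE_START.sub(fix, wikitext)
```

A real run would of course need review; this sketch does not handle tables whose attributes continue via templates or other unusual markup.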

I would like to see a policy/proposal for this type of change over at Village Pump somewhere first before a change/task like this is undertaken. 203.25.140.97 (talk) 23:06, 18 May 2009 (UTC)
As near as I can tell, this only affects people who copy & paste the text. If that is actually the case, then I think it would be a waste of resources to mass add border=1 to every wikitable that lacks one. I also don't understand why it would only affect tables classed as wikitables - why wouldn't every table without border=1 be affected? --ThaddeusB (talk) 23:47, 18 May 2009 (UTC)
It also affects people using non-graphical web browsers (lynx for example) and other possible portable devices in the future that don't read the CSS files, which is where we style everything these days. 203.25.140.97 (talk) 03:58, 19 May 2009 (UTC)
The chat on the bugzilla page seems to indicate this is not actually the case, although it is possible those people are mistaken. --ThaddeusB (talk) 04:01, 19 May 2009 (UTC)
From my reading of the chat, having border="1" does make a difference for some text browsers. Aside from saying specifically that lynx was not affected, they did not mention any specific browsers that this does in fact affect, nor did they mention how prevalent said browsers are. I think we need more information on just how many users this is affecting, and to what degree, before we go making more than 140,000 edits over this. (I did a rough search to come up with that number, and if anything it's overly conservative.) --Dycedarg ж 04:49, 19 May 2009 (UTC)
They (I) did too mention w3m there in bugzilla. But I am hoping you will at least sympathize with the more numerous "cut and paste from Wikipedia" cases, and see the need. Jidanni (talk) 02:40, 27 May 2009 (UTC)
An analysis of server logs was done a few months ago. The stats for browser are at [7]. Text browsers would be included in "non-mobile, other" which make up 0.40% of requests. Mr.Z-man 05:03, 19 May 2009 (UTC)
Why not simply add this to AWB's general fixes? There doesn't appear to be any opposition to the addition in and of itself; it's merely that people don't seem to like the idea of a bot specifically doing it. ダイノガイ?!」(Dinoguy1000) 05:28, 21 May 2009 (UTC)
While we are waiting, I have fixed Help:Table by hand. Jidanni (talk) 07:32, 27 May 2009 (UTC)
This request is for a bot to go through all 6,915,700 articles (or all 61,904,941 pages?) just to add "border=1" to every table with class wikitable? Even working from a database dump to avoid loading all those pages, this seems unnecessary. I second Dinoguy1000's suggestion: just have it added to AWB's general fixes. Anomie 12:07, 27 May 2009 (UTC)

Licensing update bot

Okay, who wants to write and run a bot to retag 1.6 million files?  ;-)

See commons:Commons:License Migration Task Force. Dragons flight (talk) 22:56, 27 May 2009 (UTC)

I replied here: commons:Commons talk:License Migration Task Force#Bot request. --MZMcBride (talk) 23:01, 27 May 2009 (UTC)

AMG has changed their URL format. Because of this, the links may soon no longer work. Can you search for and change the two templates, removing the 1: and 2: prefixes?

changing them from:

  • {{Amg movie|1:##### |Name}} to {{Amg movie|##### |Name}}
  • {{Amg name |2:##### |Name}} to {{Amg name |##### |Name}}

This would be a GREAT help for us. Thanks. -- Phoenix (talk) 04:31, 19 May 2009 (UTC)

This is not the best solution because the current template will not work with the new format. For a clean transition, I would suggest using a different template. I have set up {{AMG}} to handle all the AMG website links. So the required changes would be:
  • {{Amg movie|1:##### |Title}} to {{AMG|movie|id=#####|label=Title}}
  • {{Amg name |2:##### |Name}} to {{AMG|name|id=#####|label=Name}}
— Martin (MSGJ · talk) 07:22, 19 May 2009 (UTC)


It's just a simple change. Once the bot is finished we just change the URL... It's not that complicated. I am very proud of the template I created and how popular it has become, and I would love for it to be continued. AMG provides a few services and is very well known for its music reviews. The Amg name template is similar to Template:IMDB name, which is for actors. Can the quick change be made to fix these two templates? -- Phoenix (talk) 21:30, 19 May 2009 (UTC)
I'm sure it can be done with the same name for the template... but I don't understand what the "1:" and "2:" are for. And how are the URLs changing? – Quadell (talk) 22:04, 19 May 2009 (UTC)
Thanks :-D The reason that I made the user input 1:###### or 2:###### before was due to the odd URL, e.g. http://www.allmovie.com/cg/avg.dll?p=avg&sql=1:###### I made it require the full number so that, in case the numbers before the colon changed, the whole template wouldn't be broken... It looks like I should have just hard-coded it as http://www.allmovie.com/cg/avg.dll?p=avg&sql=1:###### because the new URLs are http://www.allmovie.com/work/###### and http://www.allmovie.com/artist/##### respectively. -- Phoenix (talk) 01:02, 20 May 2009 (UTC)

Currently the movie templates are used like {{Amg movie |1:356351 |Quantum of Solace}}, which creates the link http://www.allmovie.com/cg/avg.dll?p=avg&sql=1:356351 which redirects to http://www.allmovie.com/work/356351. Luckily the link http://www.allmovie.com/work/1:356351 also pulls up the page just fine (at the moment), so if you change the template to link to http://www.allmovie.com/work/{{{1}}} it shouldn't break any links. Then someone (I can do it) could run AWB to remove 1: and 2: from the templates. – Quadell (talk) 01:26, 20 May 2009 (UTC)
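The AWB run described here amounts to a single regex substitution over each page's wikitext. A hedged sketch (the pattern and function name are made up for illustration, not AWB's actual rule):

```python
import re

# Match the opening of {{Amg movie}} or {{Amg name}} up to and including
# the "1:" or "2:" prefix on the first parameter, tolerating whitespace.
AMG_PARAM = re.compile(r'(\{\{\s*Amg (?:movie|name)\s*\|\s*)[12]:')

def strip_amg_prefix(wikitext):
    """Drop the 1:/2: prefix, keeping the rest of the template intact."""
    return AMG_PARAM.sub(r'\1', wikitext)
```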

Ok Quadell, the templates have been changed. You can run AWB whenever you wish. Please let us know when it's done so that we can change the template's documentation :-) -- Phoenix (talk) 04:16, 21 May 2009 (UTC)
 Done You can now change the documentation. – Quadell (talk) 18:29, 26 May 2009 (UTC)

Thanks :-) -- Phoenix (talk) 01:14, 28 May 2009 (UTC)

Redirects/disambig from ticker symbols

Some ticker symbols (e.g. JAKK) do not link to the companies they represent. I'd like to see a bot that would ensure that every company that had an article and a ticker symbol had its article linked (either as a redirect or on a disambiguation page) from the ticker symbol. NeonMerlin 05:00, 28 May 2009 (UTC)

Is there a list of these ticker symbols with the appropriate company articles somewhere? "Pages embedding certain templates (which?) in Category:Ticker symbol templates" would work as a list, if we could rely on any article about a publicly traded company using the appropriate template.
A bot could easily enough create the redirect if the page is non-existent (i.e. JAKK), and it could possibly append to the list for a dab page formatted as MMM (although that one is already there), but for a categorized dab like A (disambiguation) it would be hard for the bot to know where to add it (although that too is already there) and thus would probably have to just log them for human attention. Anomie 10:59, 28 May 2009 (UTC)
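As a sketch of the decision logic outlined here (all names are hypothetical; a real bot would query the API for each ticker page's existence and content):

```python
def plan_action(page_exists, is_list_dab):
    """Decide what a bot could safely do with a ticker symbol page."""
    if not page_exists:
        return 'create-redirect'   # e.g. JAKK, currently a red link
    if is_list_dab:
        return 'append-dab-entry'  # simple MMM-style list disambiguation
    return 'log-for-human'         # categorized dabs need human placement

def redirect_wikitext(company_article):
    """Wikitext for a redirect from a ticker symbol to the company."""
    return '#REDIRECT [[%s]]' % company_article
```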

<ref> tags and punctuation/whitespace

Hi there, I often find myself obsessive-compulsively moving around <ref> tags because I believe they don't comply with what I think I once read are their usage guidelines.

So, if we denote our citation with "[1]", I believe the following applies (please kindly point me to the written and agreed-upon guidelines because I can't find them any more)

1. Ref should go after punctuation

Wrong: Bla bla bla[1].
Correct: Bla bla bla.[1]
Wrong: Bla bla bla[1], but also bla.
Correct: Bla bla bla,[1] but also bla.

2. No whitespace between punctuation and ref, or between ref and ref.

Wrong: Bla bla bla. [1]
Correct: Bla bla bla.[1]
Wrong: Bla bla bla.[1] [2]
Correct: Bla bla bla.[1][2]

Would this be something that could be done by a bot? Or does it fall into the "cosmetics" class, which I understand is not suitable for a bot? If this is possible and considered useful, please let me know. I could have a go and do it myself. Thank you. 114.150.83.231 (talk) 18:49, 28 May 2009 (UTC)
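For what it's worth, the two rules can be drafted as regex substitutions. This is only a sketch with invented names; blanket substitutions like these have known corner cases (HTML comments, ellipses, articles with a deliberate house style), so any output would need previewing rather than fully automatic operation.

```python
import re

# A <ref ...>...</ref> pair or a self-closing <ref ... /> tag.
REF = r'<ref[^>/]*(?:/\s*>|>.*?</ref\s*>)'

def tidy_refs(wikitext):
    # Rule 2: no whitespace before a ref, or between adjacent refs.
    text = re.sub(r'[ \t]+(' + REF + r')', r'\1', wikitext, flags=re.DOTALL)
    # Rule 1: move a run of refs sitting before . or , to after it.
    text = re.sub(r'((?:' + REF + r')+)([.,])', r'\2\1', text,
                  flags=re.DOTALL)
    return text
```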

No, a bot can't do this. There are too many corner cases to do this automatically. I have a script which works fairly well, but the edits need to be previewed because it screws up occasionally. Unfortunately, I think you would need an account to run a script. Gimmetrow 18:52, 28 May 2009 (UTC)
Do you think that a bot could reliably remove spaces from in front of reference tags, but without changing punctuation? That would probably have far fewer false positives and it would also be super-easy to code. –Drilnoth (T • C • L) 18:55, 28 May 2009 (UTC)
If an article already has a consistent style with spaces between punctuation and ref tag, then it should usually be left alone. I don't think a bot (ie, automatic) is a good idea here. I personally think it's better that editors use a script and take responsibility for checking the edits. Gimmetrow 19:06, 28 May 2009 (UTC)

Thanks. Could you please give me examples of the possible false positives you have in mind, so I can understand? Cheers. 114.150.83.231 (talk) 18:59, 28 May 2009 (UTC)

Some examples are given on the talk page of the script. Most situations I notice involve html comments and ellipses. It's also possible that line breaks can interfere. Gimmetrow 19:06, 28 May 2009 (UTC)
Sorry to be thick, but where is the script you are referring to? 114.150.83.231 (talk) 19:09, 28 May 2009 (UTC)
[8] Gimmetrow 19:11, 28 May 2009 (UTC)

I think at least automatically eating any whitespace to the left of any reference tag (item 2 above) should be fairly safe, no? 114.150.83.231 (talk) 19:03, 28 May 2009 (UTC)

Not if an article already has a consistent style with spaces between punctuation and ref tag. Gimmetrow 19:11, 28 May 2009 (UTC)
"If an article already has a consistent style with spaces between punctuation and ref tag, then it should usually be left alone." - I actually disagree to this. This is an area where a consensus should be reached if there isn't one already, and all articles should stick to it, leaving WP as a whole looking more consistent and professional. 114.150.83.231 (talk) 19:11, 28 May 2009 (UTC)
See WP:REFPUNC. And even if there were a consensus, it would be a cleanup thing that isn't worth having a bot running around to do only that. Anomie 19:53, 28 May 2009 (UTC)

Thanks. It looks to me like there is a clear consensus for point 2 above (spaces). I'm new to bots. Could you kindly summarize (or point me to relevant discussion/policy) for the rationale behind "it would be a cleanup thing that isn't worth having a bot"? Thank you. 114.150.72.41 (talk) 00:26, 29 May 2009 (UTC)

Making large numbers of edits for the sole purpose of the addition or removal of whitespace is strongly discouraged, as it increases server load with little resulting benefit. Erik9 (talk) 00:52, 29 May 2009 (UTC)
Agreed. With all due respect, this bot does not seem very helpful. Having ref tags in different places is not a serious problem, and will result in a large number of edits that are, frankly, unnecessary. The Earwig (Talk | Editor review) 00:58, 29 May 2009 (UTC)

I'm not sure I understand. It's bad to have a bot do this, but nobody would object to users wasting their time effecting the same number of edits manually fixing the refs in compliance with agreed-upon guidelines. If server load is the problem, can we not throttle the bot to whatever level is perceived necessary? 114.150.72.41 (talk) 02:27, 29 May 2009 (UTC) ...and as for the usefulness of such a bot, I guess it's subjective. To me, it would be helping making Wikipedia look more consistent and professional. 114.150.72.41 (talk) 02:30, 29 May 2009 (UTC)

I think that AutoWikiBrowser has something that deals with references and whitespace, but I'm not sure. It would be worth checking with the maintainers of AWB what the status is. Headbomb {ταλκκοντριβς – WP Physics} 04:11, 29 May 2009 (UTC)

Need a bot to modify some templates

I need a bot to go through the articles in Category:Mixed Drinks articles by quality and change any banner with the |focus=bar switch so that the banner will contain the |bar=yes switch.

eg {{WikiProject Mixed Drinks|focus=bar}} to {{WikiProject Mixed Drinks|bar=yes}}

The total number of articles is around three hundred (300) and about 30% have the |focus=bar switch

Thank you for your time, --Jeremy (blah blah) 02:31, 25 May 2009 (UTC)
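A sketch of the requested find/replace as a regex, tolerant of extra parameters and spacing (illustrative only; AWB or a bot framework would apply something equivalent to each talk page):

```python
import re

# Match the banner up to (but not including) the |focus=bar switch,
# whatever other parameters come first.
BANNER = re.compile(
    r'(\{\{\s*WikiProject Mixed Drinks\b[^}]*?)\|\s*focus\s*=\s*bar\s*',
    re.I)

def fix_banner(wikitext):
    """Swap |focus=bar for |bar=yes inside the Mixed Drinks banner."""
    return BANNER.sub(r'\1|bar=yes', wikitext)
```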

Possible. 100 changes? Looks like an easier job for someone with AutoWikiBrowser than a bot, but it certainly is doable both ways. Contact me if AutoWikiBrowser can't be used, and I'd be happy to make a bot for it. The Earwig (Talk | Editor review) 03:05, 25 May 2009 (UTC)

I'd love to do this using AWB, I even downloaded it but cannot figure out how to use it. --Jeremy (blah blah) 06:18, 25 May 2009 (UTC)

Hi there, my bot Thehelpfulbot has been approved to do find/replace tasks like this one. I can do this task if you like, after my bot has finished running the task that it is currently on. The Helpful One 14:17, 25 May 2009 (UTC)

Thank you, that would be greatly appreciated. --Jeremy (blah blah) 14:30, 25 May 2009 (UTC)

{{Doing}} - I ran a filter through the list, so it's going through all the talk pages in the Category - a total of 298. As you say there are only about 100 find/replaces to be done, it should go through the list and complete the task soon! :) The Helpful One 14:58, 25 May 2009 (UTC)
 Done! Hmm, that was certainly odd, the bot went through the entire list but only made 6 find/replace edits. I think that this might mean that someone else might have already run through the list? The Helpful One 15:05, 25 May 2009 (UTC)
Or it simply could be an over-estimation (although unlikely). I didn't know that we approved bots for generic find/replace tasks— I thought we only approved them for specific find/replace tasks. Gee, if I had known about this with EarwigBot I's 3rd task... The Earwig (Talk | Editor review) 15:18, 25 May 2009 (UTC)

Could someone tag all of the articles under Category:Bartending with WPMIX |bar=yes? The reason for the problem is that not all articles have the |bar= switch on them. --Jeremy (blah blah) 04:48, 27 May 2009 (UTC)

 Done I checked using AWB, and none of the pages under the category were missing the |bar parameter. MacMedtalkstalk 02:53, 30 May 2009 (UTC)

I thank you sir. --Jeremy (blah blah) 04:07, 30 May 2009 (UTC)

Stub tagging for WP:Food

This is a large request and I would like to know if this can be done easily...

I would like to have a bot to go through the Category:Unassessed Food and drink articles and assess the stubs. The problem is there are 10,322 unassessed articles and I am afraid this would bog the system down.

Could this be done without disturbing the system?

--Jeremy (blah blah) 02:46, 25 May 2009 (UTC)

Possible. If I remember correctly, this is something BetacommandBot used to do before it and its owner were indef-blocked. Like the above task, it doesn't look that hard. Probably a little more complex, but still very doable. The Earwig (Talk | Editor review) 03:21, 25 May 2009 (UTC)
Easy, really. Besides assessing stubs as "stub", it's also trivial to assess redirects (if any) as "redirect" and disambiguation pages as "disambig". Slightly more complicated, but still easy, is to also copy the highest or lowest rating from any other projects' banners on the talk page. Several bots can do this; AnomieBOT is busy with another assessment run at the moment, but if no one else has done this by the time that's done then I can take care of it. Anomie 04:37, 25 May 2009 (UTC)
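The assessment rules Anomie lists reduce to a small decision function. A hypothetical sketch (real bots read redirect, disambiguation, and stub-tag status from the API or page content):

```python
def auto_class(is_redirect, is_dab, has_stub_tag):
    """Return the banner |class= value a bot could set, or None to
    leave the article for human assessment."""
    if is_redirect:
        return 'redirect'
    if is_dab:
        return 'disambig'
    if has_stub_tag:
        return 'stub'
    return None  # not auto-assessable; a human (or another banner) decides
```

Copying the highest or lowest rating from other projects' banners, as mentioned above, would replace the None branch with a lookup over those banners.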

Thank you, if that would not be a bother, it would be appreciated. --Jeremy (blah blah) 04:44, 25 May 2009 (UTC)

Sambot has approval for exactly this kind of task. I'll do a few edits for you to check over first. [[Sam Korn]] (smoddy) 15:23, 25 May 2009 (UTC)
See what you think of these edits. [[Sam Korn]] (smoddy) 15:43, 25 May 2009 (UTC)

Those are very good; most are dead on, though there were a couple that were rated as Start that could be a C. The difference I use is the amount of content in the article. I would say go ahead and do that for us.--Jeremy (blah blah) 00:59, 26 May 2009 (UTC)

Doing... [[Sam Korn]] (smoddy) 18:16, 26 May 2009 (UTC)
Done. About 7,200 articles tagged; 3,046 left untagged. [[Sam Korn]] (smoddy) 11:09, 28 May 2009 (UTC)

Thank you very much sir! --Jeremy (blah blah) 04:06, 30 May 2009 (UTC)

Abuse filter reporter

We need a bot to report cases of certain abuse filters being tripped. I watch several AFs where the only activity is by easy-to-spot long-term-abuse socks. But if I am not actively watching them and do not have abuse IRC open, then the socks just keep trying and trying until they find a way around the filter. If a bot monitored such AFs and immediately reported them to WP:AIV/TB2 - which is watchlisted by many admins - the socks would be blocked very quickly. Recommend the bot operator have sole discretion over which AFs are reported immediately, taking discussion/consensus into account in contentious cases. As a bonus, maybe other AFs are reported only if tripped a certain number of times - as happens in the abuse IRC. Thanks! Wknight94 talk 18:32, 30 May 2009 (UTC)

Coding... Mr.Z-man 20:47, 30 May 2009 (UTC)
BRFA filed. Mr.Z-man 20:02, 31 May 2009 (UTC)

List of articles

List all articles alphabetically (no duplicates please) of the category Prince Edward Island and its subcategories here. Thanks! —Sniff (talk) 14:05, 31 May 2009 (UTC)

Done I think. - Jarry1250 (t, c) 14:26, 31 May 2009 (UTC)
Thank you, but can you do this request again? A bad category, Category:Aboriginal peoples in Atlantic Canada, was included in the first list. It's my error, sorry... Best regards, --Sniff (talk) 19:15, 31 May 2009 (UTC)
But that's a subcategory of Prince Edward Island. It's only a few articles, can't you remove them manually? - Jarry1250 (t, c) 19:18, 31 May 2009 (UTC)
No, it's that category and its subcategories; it concerns over 500 articles. Thanks, --Sniff (talk) 20:07, 31 May 2009 (UTC)
Finally, don't include these categories and subcategories: Category:Aboriginal peoples in Atlantic Canada, Category:Images of Prince Edward Island, Category:Prince Edward Island geography stubs and Category:Prince Edward Island stubs. Thank you! --Sniff (talk) 01:21, 1 June 2009 (UTC)
Is this what you wanted? --Chris 02:32, 1 June 2009 (UTC)
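The traversal behind a list request like this can be sketched offline, with a plain dict standing in for the category API. Excluded categories are skipped, and a set guarantees no duplicates (all names here are illustrative):

```python
def collect_articles(tree, root, excluded):
    """Walk a category tree, skipping excluded categories, and return a
    sorted, de-duplicated list of article titles.

    tree maps a category name to a tuple (articles, subcategories)."""
    seen, visited, stack = set(), set(), [root]
    while stack:
        cat = stack.pop()
        if cat in excluded or cat in visited:
            continue  # also guards against category cycles
        visited.add(cat)
        articles, subcats = tree.get(cat, ([], []))
        seen.update(articles)
        stack.extend(subcats)
    return sorted(seen)
```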

Longtalk Template

I'm requesting a bot to stick the {{Longtalk}} template onto talk pages that are indeed excessively long. This should be a very, very simple bot to code. =D Smallman12q (talk) 18:15, 31 May 2009 (UTC)

How long is an excessively long piece of string? - Jarry1250 (t, c) 18:29, 31 May 2009 (UTC)
Well the template says 150kb. Cute=D. Smallman12q (talk) 19:44, 31 May 2009 (UTC)
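Using the 150 kB figure the template mentions, the core check is trivial. Sketch only (a real bot would read page sizes from the API rather than computing them from fetched wikitext):

```python
def needs_longtalk(wikitext, threshold_kb=150):
    """True if the page source exceeds the threshold, measured in kB of
    UTF-8 bytes, matching how MediaWiki reports page sizes."""
    return len(wikitext.encode('utf-8')) / 1024 > threshold_kb
```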
Can't a list simply be created stating which talk pages are too large? Then if people want, they can clean up. Having this tag on the talk page seems like overkill. Garion96 (talk) 01:29, 1 June 2009 (UTC)
I concur. Doesn't it make more sense to spend your energy on archiving the talk page, rather than writing a bot to tag it? The Earwig (Talk | Editor review) 02:35, 1 June 2009 (UTC)
See Wikipedia:Database reports/Long pages. That is a listing of all non-subpages exceeding 175kb. Click on the symbol next to the Page column heading to sort by page name and you'll easily find all the Talk: pages. -- JLaTondre (talk) 03:13, 1 June 2009 (UTC)
here's a list with just the talkpages. --Chris 03:32, 1 June 2009 (UTC)
Thanks. I looked on the toolserver.org's project listings, but didn't see that. Could you make them links? That would be more convenient. -- JLaTondre (talk) 03:35, 1 June 2009 (UTC)
done --Chris 06:44, 1 June 2009 (UTC)

Message delivery regarding {{NoMultiLicense}}

Will one of the message delivery bots please deliver the message discussed at WP:VPP#NoMultiLicense template to the talk pages of each user using {{NoMultiLicense}}, in preparation for the Wikimedia relicensing? There seems to be about 127 of them. Thanks. Anomie 01:57, 2 June 2009 (UTC)

 Doing... The Helpful One 20:16, 2 June 2009 (UTC)
 Done! The Helpful One 20:40, 2 June 2009 (UTC)

List of articles tagged for both WikiProject Films and WikiProject Biography

Hello! I was wondering if it might be possible to see a list of articles which are tagged for both WikiProject Films and WikiProject Biography, since the former's scope does not include biographies, but the project has a tough time tracking mistagged articles. A one-time generated list is fine; something on the toolserver which dynamically updates would be even better! :) Many thanks in advance, Girolamo Savonarola (talk) 18:03, 2 June 2009 (UTC)

Did you get my reply on your talk page Girolamo? I was hoping that that would have worked out for you. - Jarry1250 (t, c) 18:07, 2 June 2009 (UTC)
I did; unfortunately it did not...the page's status is also listed as SNAFU, which might account for the failure. Girolamo Savonarola (talk) 18:08, 2 June 2009 (UTC)
SNAFU = Situation Normal: [...]. What combination did you try? - Jarry1250 (t, c) 18:10, 2 June 2009 (UTC)
Huh. Film articles by quality and Biography articles by quality, each to a depth of 10. I'm re-attempting now with a depth of 5 to see if that helps. Girolamo Savonarola (talk) 18:14, 2 June 2009 (UTC)
Nope, not working. These are two rather large categories - it may simply be asking too much of the application. Girolamo Savonarola (talk) 19:48, 2 June 2009 (UTC)
Quite possible. In the mean time, I'll see if I can knock up a one-time report. That might take a while as well though (and probably won't include redirects of the template for now either). - Jarry1250 (t, c) 19:56, 2 June 2009 (UTC)
I have to go now really, but I've started the query; when finished it will leave a nice text file of page titles at http://toolserver.org/~jarry/Film%20bios.txt . If you check that later, you will probably find a list at that web address. It's only going to be the intersection of {{Film}} and {{WPBiography}}, but that's a start I think. Hope it helps, - Jarry1250 (t, c) 20:03, 2 June 2009 (UTC)
Thank you! That's all I actually want - a list of talk pages tagged by both of those templates (or redirects of the templates). If you have time down the line to make a dynamic version, I'd be very grateful, too! :) Thanks again, Girolamo Savonarola (talk) 20:29, 2 June 2009 (UTC)
Sure, speed coding it now (with redirects to the templates). (I managed to rearrange some RL stuff.) - Jarry1250 (t, c) 20:32, 2 June 2009 (UTC)
 Done http://toolserver.org/~jarry/filmbios/ . Haven't tested it yet though. Enjoy! - Jarry1250 (t, c) 20:38, 2 June 2009 (UTC)
Awesome! Many thanks again, Girolamo Savonarola (talk) 21:08, 2 June 2009 (UTC)

Project page tagging

Could you tag the talk pages of these portals and their subpages as follows:

Thank you for your time, --Jeremy (blah blah) 06:40, 1 June 2009 (UTC)

Hi there, just to confirm: you want me to go through Category:Drink Portal for example, convert the links to talk pages (in AWB), then add that template to Portal:Drink/Selected_article/26 for example - i.e. I will be creating every page that is red-linked? Are any of the pages already tagged with that template? If so I will need to use an extra option in AWB (no big deal if needed!). Thanks, The Helpful One 17:02, 1 June 2009 (UTC)

Sorry about not being more specific; here is what I would like to be done:

  • Go through the pages and, if they are not categorized, add the cat. If it is a sub page, please add the category along with a <noinclude></noinclude>; eg <noinclude>[[Category:Food portal|]]</noinclude>
  • On the talk pages of these articles, please add the template to pages that have not been tagged as being part of the portal;
  • If there are pages that have not been created yet, you can create them by adding the cat to the page.
  • Please include all subpages of the main portal pages in this tagging run

Is that a little more helpful? --Jeremy (blah blah) 21:58, 1 June 2009 (UTC)

Hi there, yes I think it is. Just to confirm, for Portal:Drink for example, I will go through the pages relating to the Drink Portal, located here. For this example I will use Portal:Drink/Intro. I'm wondering how you classify a "subpage", as that page has the category without any noinclude tags, yet Portal:Drink/Things you can do for example has the noinclude tags? Then for the pages that haven't been created, I presume that I create the page by adding the cat (with / without noinclude tags) and then add the relevant template to the talk page? Thanks, The Helpful One 22:25, 1 June 2009 (UTC)

A few more answers:

  • Pretty much all of the subpages are transcluded and should have the <noinclude></noinclude> included in the category. If they do not have it, please add it.
  • The subpages are anything below the main page, eg Portal Foo/Selected bar, Portal Foo/Selected bar/Selected bar 1, Portal Foo/Selected bar/Selected bar 2...
  • Any subpage you create, please include the <noinclude></noinclude> when categorizing it.
  • Yes, please add the project tag to the talk page of any subpage you create.

Are there any other questions you need me to answer? I want to make this as smooth as possible for you and will gladly help. --Jeremy (blah blah) 23:18, 1 June 2009 (UTC)
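The noinclude-wrapped categorization described in these points is a simple text transform. A hypothetical sketch (function name and behavior are illustrative only):

```python
def add_portal_cat(wikitext, category):
    """Append a noinclude-wrapped category link unless the page already
    carries the category in some form."""
    tag = '<noinclude>[[Category:%s|]]</noinclude>' % category
    if ('Category:%s' % category) in wikitext:
        return wikitext  # already categorized, leave alone
    return wikitext.rstrip('\n') + '\n' + tag + '\n'
```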

 Doing... That explanation did the trick, I'm running the task on AWB at the moment, you'll have to bear with me for a while, as the process is a slow, but mostly automated one! The Helpful One 17:32, 2 June 2009 (UTC)
This is  Done for Portals Drink, Food, Wine and Beer. I still have the other 3 left to do. The Helpful One 19:34, 2 June 2009 (UTC)
This entire task is now  Done - if there are any problems, or the bot missed out pages, feel free to leave a message on my talk page and I will see what I can do. Hope this helps, The Helpful One 20:07, 2 June 2009 (UTC)

Thank you --Jeremy (blah blah) 05:09, 3 June 2009 (UTC)

DC Meetup delivery request

Would a bot owner deliver the message at Wikipedia talk:Meetup/DC 7#Final announcement? (That page lists the three user categories for whom the message delivery is requested.)

Thanks!

-- John Broughton (♫♫) 15:59, 2 June 2009 (UTC)

 Doing... The Helpful One 19:32, 2 June 2009 (UTC)
 Done all 430ish pages. The Helpful One 20:47, 2 June 2009 (UTC)
Excellent! Thank you very much. -- John Broughton (♫♫) 21:34, 3 June 2009 (UTC)

Unreferenced football BLPs

A bot is needed to update Wikipedia:WikiProject Football/Unreferenced BLPs/Sorted by country by computing the intersections of each subcategory of Category:Football (soccer) players by nationality with Category:All unreferenced BLPs, per the discussion on my talk page. Though I'm a bot operator myself, this is beyond what I can easily handle. Erik9 (talk) 23:51, 28 May 2009 (UTC)
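The category intersection requested here is straightforward once the category memberships are in hand. In this sketch, plain sets stand in for data fetched from the API or a toolserver database:

```python
def unreferenced_by_country(players_by_country, unreferenced_blps):
    """Map each nationality category to the sorted list of its players
    that also appear in Category:All unreferenced BLPs; drop countries
    with no matches."""
    return {
        country: sorted(players & unreferenced_blps)
        for country, players in players_by_country.items()
        if players & unreferenced_blps
    }
```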

Just wanted to bump this request. A bot updating this page would free time to source articles. Rettetast (talk) 02:19, 4 June 2009 (UTC)

OTRS pending bot for Commons

We need a bot to check subcategories of commons:Category:OTRS pending (the ones that are older than 30 days) and notify the uploaders of those files (using a template) and also replace the {{OTRS pending}} tag on the file page with {{No OTRS permission since}}. There are further details here. Feel free to ask any questions here, there, or on either of my talk pages. - Rjd0060 (talk) 23:38, 2 June 2009 (UTC)

This doesn't seem too substantially different from the bot I'm currently running; I don't think it would take too long to code. I guess the important questions are:
  • Is this bot running entirely on Commons, i.e., are notifications being made to users' Commons talk pages only? (and if so, why are you asking here? ;-) )
  • Where would I go to get a bot approved on Commons?
  • Are there any alternate spellings of "OTRS pending" that the bot could need to be aware of? (i.e. "OTRSPending", etc., don't worry about case sensitivity)
I'll start working on a basic framework for it. Hersfold (t/a/c) 17:07, 3 June 2009 (UTC)
More questions (possibly more to be added):
Hi. Thanks. The bot should be for Commons only, and I'm asking here because there are actually people around here, and not so much on commons. Bots can be approved at commons:COM:BRFA. To answer your other questions:
There are several alternate spellings. All of which can be seen here.
As for images with {{OTRS received}}, I would just ignore those for the time being. They probably need to be reviewed manually.
Now my question: Does the template that the bot leaves for the uploader need to be in any special format? I haven't created it yet. - Rjd0060 (talk) 17:31, 3 June 2009 (UTC)
As long as it has clear documentation, it should be ok - preferably something like {{No OTRS permission since|YYYY|MM|DD}} so the bot can handle the dates easily, but whatever you sort out I should be able to work with. Hersfold (t/a/c) 18:44, 3 June 2009 (UTC)
Wait, misunderstood. The one for the uploader shouldn't need any special parameters, I don't think; don't include a signature, I can have the bot add one itself, but do allow the template to be subst'ed. Hersfold (t/a/c) 18:59, 3 June 2009 (UTC)
Try something like: {{subst:No OTRS Notice|Filename.ext|YYYY|MM|DD}} ~~~~, where the date is today + 15 days. Hersfold (t/a/c) 19:01, 3 June 2009 (UTC)
I wasn't going to include a date in this. Does commons:Template:Noticket look workable? - Rjd0060 (talk) 22:36, 3 June 2009 (UTC)
Another question: should I allow all images w/o timestamps to be reviewed manually, or try to have the bot determine an upload date? The latter could prove very difficult and is likely to be unreliable; pulling the date from the description box won't always work, such as for this image just uploaded today. Hersfold (t/a/c) 19:45, 3 June 2009 (UTC)
I would have it ignore anything without a date. - Rjd0060 (talk) 22:36, 3 June 2009 (UTC)
For the notification template, if you could make the header optional, that'd be good too. Thanks. Hersfold (t/a/c) 23:22, 3 June 2009 (UTC)
Do you mean I should add a header? - Rjd0060 (talk) 23:54, 3 June 2009 (UTC)
If you add one, it should be optional. You don't have to though; it's easier the way the bot framework is set up to just have the bot make the header. A prototype should be available shortly, by the way. Hersfold (t/a/c) 14:57, 4 June 2009 (UTC)
Missed your note earlier - Template:Noticket looks just fine. Thanks! I'll have the bot set up to run on test wiki soon. Hersfold (t/a/c) 15:04, 4 June 2009 (UTC)
I wasn't going to use a heading on the template itself. By the way, here is an updated plan. You'll have to make sure the bot, when it replaces the OTRS pending template with the No OTRS permission since template that it uses the correct date format. For example, the bot should remove {{OTRS pending|year=2008|month=December|day=16}} and replace it with {{No OTRS permission since|day=4|month=June|year=2009}} (or whatever todays date is). Sorry for the confusion. - Rjd0060 (talk) 17:44, 4 June 2009 (UTC)
(outdent) We talked about this on IRC, but for the record, it shouldn't matter. I'm using {{subst:nopd}} to generate that template, which handles the dates for me.
Testing is underway at testwiki. Hersfold (t/a/c) 17:57, 4 June 2009 (UTC)

New bot for Portuguese Wikipedia

Hello people! I'm here to tell the English Wikipedia's community about importing a bot to the Portuguese Wikipedia. The bot is CorenSearchBot (talk · contribs · count), operated by Coren (talk · contribs · count), which searches articles for copyright violations. The user HyperBroad suggested it there, and the suggestion was approved (link here). But I talked with Coren, and he told me that he doesn't have time and doesn't speak Portuguese, so he can't operate it. He suggested I write here to ask: does anybody want to operate it there? Coren will help and share the source code with anyone who is interested. Vitorbraziledit (talk) 02:59, 4 June 2009 (UTC)

You could have a look around users in this category - Kingpin13 (talk) 09:56, 4 June 2009 (UTC)
I'm looking for a user of the English Wikipedia who knows the programming language needed to do it. Vitorbraziledit (talk) 15:12, 4 June 2009 (UTC)
Anybody? Vitorbraziledit (talk) 23:57, 4 June 2009 (UTC)

Journal compilation

It would be nice if a bot retrieved the |journal= parameter from {{citation}} and {{cite journal}} (probably using data dumps) and built a list of journals and journal abbreviations with the number of times they are found. This would be useful for Wikipedia:WikiProject Journals, so they could assess what are the high-priority missing journals, redirect to main articles, etc...

The list should be alphabetically ordered, with entries linked. Redirects should be italicized. Place the list at Wikipedia:WikiProject Academic Journals/Bot compilation/X1, where X is the appropriate letter. If an article starts with The X, classify it under X. A limit of 500 entries per page would be a good idea (then go to X2, X3...). After this is done, any redlink with a count of over 10 hits (1 citation = 1 hit) should be placed at Wikipedia:WikiProject Academic Journals/Bot compilation/Missing articles and redirects/1. Again, a limit of 500 entries per page would be a good idea (then go to /2, /3, ...). Headbomb {ταλκκοντριβς – WP Physics} 03:10, 30 May 2009 (UTC)

The bot would be run every time there is a dump. Headbomb {ταλκκοντριβς – WP Physics} 03:13, 30 May 2009 (UTC)
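For what it's worth, the counting step could be sketched roughly like this. It is a naive regex pass over dump wikitext, not a full template parser, and the function name is mine:

```python
import re
from collections import Counter

# Naive match for |journal= inside {{citation}} / {{cite journal}} calls.
JOURNAL_RE = re.compile(
    r"\{\{\s*(?:citation|cite journal)\b[^{}]*?\|\s*journal\s*=\s*([^|{}]+)",
    re.IGNORECASE | re.DOTALL,
)

def count_journals(pages):
    """Tally |journal= values across an iterable of page wikitexts."""
    counts = Counter()
    for text in pages:
        for match in JOURNAL_RE.finditer(text):
            value = match.group(1).strip().strip("[]")  # unlink plain [[...]]
            if value:
                counts[value] += 1
    return counts
```

Piped links and nested templates would need real parsing, but this is enough to produce a first ranked list from a dump.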
Hello, I would be glad to take this on if you can wait a few days before I start on it. --ThaddeusB (talk) 17:57, 31 May 2009 (UTC)
There's no rush. If there's anything unclear, just ask. Headbomb {ταλκκοντριβς – WP Physics} 23:38, 31 May 2009 (UTC)
This would be fantastic; however, a one-off list of just the top 500 would be equally useful. That will likely give us a lot of cleanup ideas, so that a second pass will be more beneficial.
Also useful would be a list of |journal= values where the value is an abbreviation (i.e. containing periods) and where that abbreviation either a) doesn't exist as a redirect or b) does not exist in the |abbreviation= field of any article containing the Infobox Journal template. Again, just a list of the top 500 would be sufficient to keep us busy for years.
John Vandenberg (chat) 11:49, 2 June 2009 (UTC)
Yes, I thought of that, but the thing is that abbreviations are often not dotted, are written differently, etc. (for example, I've seen Nucl. Instr. Meth. Phys. Res., Nucl. Instr. and Meth. in Phys. Res., Nucl Instr & Meth Phys Res, and other variants). Having the full list of inputs ordered alphabetically is not the most useful thing in the world for identifying high-priority articles, but it's useful for finding material on a particular journal (useful for me at least, since I mostly want to work on physics-related journals). There would also be the inputs ordered by hits (which is the list that seems to interest you), and I don't think the first 500 hits will last until the next dump, because there would be a great number of redirects and abbreviations among them. At worst, all you'd have to do is restrict yourself to the first page of that list :P. Headbomb {ταλκκοντριβς – WP Physics} 15:07, 2 June 2009 (UTC)
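One way to group those variants would be to normalize abbreviations before comparing them; a rough sketch (the filler-word list is illustrative, not exhaustive):

```python
import re

def normalize_abbrev(name):
    """Collapse punctuation and filler-word variations so that e.g.
    'Nucl. Instr. Meth. Phys. Res.' and 'Nucl Instr & Meth Phys Res'
    map to the same key."""
    name = name.lower()
    name = re.sub(r"[.,&]", " ", name)                 # drop dots, commas, ampersands
    name = re.sub(r"\b(and|in|of|the)\b", " ", name)   # drop filler words
    return " ".join(name.split())
```

Grouping the raw |journal= counts by this key would merge the dotted and undotted forms into one entry.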

(unindent) Oh, and to the bot coder: if you could also place existing articles & redirects in bold (in addition to what I just wrote above), it would make it much easier to keep track of progress. Here's an example of what the end product should look like, in case I'm not very clear.

Alphabetical By hits

Thanks.Headbomb {ταλκκοντριβς – WP Physics} 15:07, 2 June 2009 (UTC)

A Request for approval has been filed --ThaddeusB (talk) 15:44, 8 June 2009 (UTC)

Tagging and categorizing for WikiProject Elements (somewhat complex)

Currently, WP Elements and the inactive WP Isotopes are using multiple banners with various parameters used in a more or less consistent manner to achieve various things. The request is sort of "three-step", although implementation doesn't have to be. These tables might prove useful.

Tagging
  • All 118 elements should be tagged with (or have {{Chemical Element}} converted to the new template) {{WP Elements}} and {{V0.5}}. The parameters |class= and |importance= should be imported into {{WP Elements}} and be written if not present. The parameter |peer-review=yes should be imported if present, but otherwise should be removed.
  • All existing pages of the form Elementname-X, where X is a number, should be tagged with (or have {{Chemical Element}}/{{Chemical Element Isotope}} converted to the new template) {{WP Elements|isotope=yes}}. The parameters |class= and |importance= should be imported when possible, and be written if not present. The parameter |peer-review=yes should be imported if present, but otherwise should be removed. If the pages are redirects, they should be tagged with |class=redirect.
  • All existing pages of the form Isotopes of Elementname, should be tagged with (or have {{Chemical Element}}/{{Chemical Element Isotope}} converted to the new template) {{WP Elements|isotope=yes}}. The parameters |class= and |importance= should be imported when possible, and be written if not present. The parameter |peer-review=yes should be imported if present, but otherwise should be removed. If the pages are redirects, they should be tagged with |class=redirect.
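The banner conversion described in the bullets above could be sketched as a plain text transformation; this is a naive parameter parse (function name mine) that assumes no nested templates inside the banner:

```python
import re

def convert_banner(wikitext, isotope=False):
    """Swap {{Chemical Element}} (or the isotope variant) for {{WP Elements}},
    carrying over |class= / |importance= (written blank if absent) and keeping
    |peer-review=yes only when it was set."""
    m = re.search(r"\{\{\s*Chemical Element(?: Isotope)?\s*([^{}]*)\}\}", wikitext)
    if not m:
        return wikitext
    params = dict(p.split("=", 1) for p in m.group(1).split("|") if "=" in p)
    params = {k.strip(): v.strip() for k, v in params.items()}
    new = "{{WP Elements"
    if isotope:
        new += "|isotope=yes"
    new += "|class=" + params.get("class", "")
    new += "|importance=" + params.get("importance", "")
    if params.get("peer-review") == "yes":
        new += "|peer-review=yes"
    new += "}}"
    return wikitext[:m.start()] + new + wikitext[m.end():]
```

The redirect-tagging and categorizing steps would sit on top of this same replacement.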
Categorizing
Deprecation of old banners

That was a mouthful. Headbomb {ταλκκοντριβς – WP Physics} 21:09, 6 June 2009 (UTC)

BRFA filed at Wikipedia:Bots/Requests for approval/Erik9bot 10. Erik9 (talk) 02:26, 8 June 2009 (UTC)

Press multi adder

I'd like to request a bot that would go through the news, discover when links are made to Wikipedia articles, and add a {{press multi}} to the talk page. Ideally, the bot would scrape Google News and see which articles contain a Wikipedia link. Does anyone support this? Smallman12q (talk) 20:09, 5 May 2009 (UTC)

Any comments?Smallman12q (talk) 01:32, 11 May 2009 (UTC)
Possible Possible. I suppose it could be done and would be a decent idea, but I'm not going to do it. I already have one bot that's in trial, one whose case is open, and one that I'm coding right now. I'd do it, but I have too many things to handle right now. Sorry. The Earwig (Talk | Contributions) 01:40, 11 May 2009 (UTC)
Well, at least I know it's possible now =D. Smallman12q (talk) 14:18, 16 May 2009 (UTC)
I like the idea. I'm going to have a look at it over the next 1/2/3 weeks! Anyone else is welcome to help! dottydotdot (talk) 20:01, 19 May 2009 (UTC)
Although, having had a quick look into it, it looks like that's already being done on some articles (for instance the Ted Kennedy and Maurice Jarre article talk pages), so it's probably not needed. dottydotdot (talk) 20:36, 19 May 2009 (UTC)
What do you mean, "It's already being done on some articles?" This request is for a bot to scan Google News (or something) and find Wikipedia links, not for users to discover links themselves. Just because Ted Kennedy has a note on it about being linked to on MSNBC doesn't mean every link will be found. Am I correct? The Earwig (Talk | Editor review) 20:44, 19 May 2009 (UTC)

(New indent) No, I suppose. I think I just assumed; well, I don't really know! (I was a bit rushed!) OK, I'll keep looking! dottydotdot (talk) 07:27, 20 May 2009 (UTC)

I'm looking at Google News currently, and it would appear that it is very rare for news organisations to link directly to a Wikipedia article - [9]. Any ideas on how best to find when news organisations are talking about a specific article? Maybe I've missed something in the Google News search? dottydotdot (talk) 11:10, 20 May 2009 (UTC)
Google tends not to pick them up, as the links are not explicit but rather hyperlinks with a different set of link text (I don't remember the term). For starters, have a look at Wikipedia in the Guardian and Wikipedia in the New York Times. An example of an article containing wikilinks is example. Smallman12q (talk) 01:28, 25 May 2009 (UTC)
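In other words, a scraper would have to look at the href of each anchor rather than its visible text. A minimal sketch using Python's standard HTML parser (class and function names are mine):

```python
from html.parser import HTMLParser

class WikipediaLinkFinder(HTMLParser):
    """Collect hrefs pointing at English Wikipedia articles, even when the
    visible link text never mentions Wikipedia."""
    def __init__(self):
        super().__init__()
        self.articles = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if "en.wikipedia.org/wiki/" in href:
            self.articles.append(href.rsplit("/wiki/", 1)[1])

def find_wikipedia_links(html):
    """Return the article titles linked from a news page's HTML."""
    finder = WikipediaLinkFinder()
    finder.feed(html)
    return finder.articles
```

Fed the HTML of a news story, this returns the linked article titles regardless of what the link says on the page.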

Regardless, accessing Google News for links to Wikipedia via a bot would be a breach of their Terms of Service:

"You agree not to access (or attempt to access) any of the Services by any means other than through the interface that is provided by Google, unless you have been specifically allowed to do so in a separate agreement with Google. You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) and shall ensure that you comply with the instructions set out in any robots.txt file present on the Services." — Google Terms of Service, Section 5.3

You'd have to do it manually, as Google does not issue API keys anymore. Another option is to try another news search engine, such as Yahoo News, that does give out API keys. I ran into the same problem with EarwigBot II (BRFA · contribs · actions log · block log · flag log · user rights), as did Coren with CorenSearchBot (BRFA · contribs · actions log · block log · flag log · user rights). I hope that helps! The Earwig (Talk | Editor review) 02:58, 25 May 2009 (UTC)

Another option, rather than scraping the search engines, would be to scrape the news sites directly (the top 20 or so), though Yahoo News would probably be best. So, are there any takers yet? Smallman12q (talk) 02:29, 30 May 2009 (UTC)
I would do this, but I can't. I'm currently using the Yahoo API to run EarwigBot II, so if I write another bot to use the Yahoo API, I will be exceeding Yahoo's maximum query limit. I can certainly help another user write the code, though. The Earwig (Talk | Editor review) 02:35, 30 May 2009 (UTC)
Perhaps you could write the code and then someone else (a willing admin) could run it. Or you could write it in PHP and it could run on the toolserver? Smallman12q (talk) 18:00, 30 May 2009 (UTC)
I'm pretty much a beginner when it comes to PHP, so that might be a little difficult, and I don't have a toolserver account. I'd much rather do it in Python, because I already have some of the code for it. Hopefully I can get another user to run it for me. Let's see how this turns out: Coding... The Earwig (Talk | Editor review) 18:09, 30 May 2009 (UTC)
Scratch that, I can run the bot. Yahoo issues multiple API keys to the same user. The bot will be run as EarwigBot III. The Earwig (Talk | Editor review) 18:36, 30 May 2009 (UTC)

Still doing... Most of the code is done. I just have to finish writing two of the functions (query() and makechanges()), and then I'll put up a BRFA for it. The Earwig (Talk | Editor review) 23:56, 30 May 2009 (UTC)

I would be interested in seeing the code once it's done, if that's possible? Dottydotdot (talk) 00:01, 31 May 2009 (UTC)
Scrap that, found it! Thanks. Dottydotdot (talk) 00:04, 31 May 2009 (UTC)
Is it possible to also write a bot for Template:Onlinesource, for articles referencing Wikipedia articles? Smallman12q (talk) 18:04, 31 May 2009 (UTC)
Please note that it is practically Impossible to get a bot to distinguish between Wikipedia being mentioned and Wikipedia being cited: web pages are simply built too differently from each other, making it very difficult for the bot to identify one over the other. I was thinking that the bot would simply use {{press}}, and if it should actually use {{onlinesource}}, another user could correct it. Maybe it would be possible to give {{press}} a parameter, such as {{press|bot=yes}}, identifying it as done automatically. Then, users viewing the talk page could notice it, and verify it if there was actually a citing on the page. Does anyone support this? The Earwig (Talk | Editor review) 01:50, 1 June 2009 (UTC)
Yep, I don't know how many people will actually bother, but just something like that so they know it was done by a bot. Good work! Dotty••|TALK 13:48, 2 June 2009 (UTC)

I'm almost done. One of the scripts is complete, while the other has two remaining elements to be coded before it will be ready. I should have been done sooner, but some unexpected work came up, and I didn't have time to finish the code. I'll definitely be done by the week's end. The Earwig (Talk | Editor review) 20:01, 2 June 2009 (UTC)

Y Done, because it's Declined Not a good task for a bot.. I'm sorry I didn't notice this earlier. The code is not the problem; the problem is that news organizations rarely mention a link to Wikipedia (such as http://en.wikipedia.org/wiki/Foo or Foo on Wikipedia). Yahoo has a grand total of two results from the past month that the bot would actually be able to do something with, and both of them are rather insignificant findings, such as links on discussion pages, et cetera. A user could occasionally use that query, [10], to find results, but asking a bot to do it is a simple waste of resources. Really sorry here; I know I spent time on this, but I didn't realize until recently that it was actually pointless. If someone wants to resurrect this and wants my finished source code, I'll be happy to give it to them, but I'm not going to run a bot that will only find one or two links per month, when it could be done perfectly well by a human. It was certainly a good idea, but it seems I didn't look at it carefully enough. My regards for one last time, The Earwig (Talk | Editor review) 23:57, 5 June 2009 (UTC)
Are you positive that it's mentioned this rarely? I was expecting about 100-800 a month. Have you considered using another search engine, such as the Bing API?

I did a Google News search for "wikipedia.org" for May and June... this is what I [http://news.google.com/news?pz=1&ned=us&hl=en&q=%22wikipedia.org%22&as_drrb=b&as_minm=6&as_mind=1&as_maxm=6&as_maxd=8 found]

  1. D-Day holiday of remembrance, Clovis News Journal
  2. D-Day, Denver and the Greatest Generation, Examiner.com
  3. G20 Meeting And Mr. Yar’Adua’s Lamentation - 1, Nigeria News
  4. G20 Meeting And Mr. Yar’Adua’s Lamentation - 2, Nigeria News
  5. AllHipHop.com Selects Kaltura's Open Source Online Video Platform for Full Media Management, PR-USA
  6. From NeXTSTEP to Cocoa: Erik Buck on the Development of Cocoa and Objective-C, InformIt
  7. Mandatory voting: A good way to increase turnout?, Daily Kos
  1. Can't get anything past Wikipedia .., Politics in Minnesota... this one only refers to wikipedia.org

There are many more that link under a different title or refer to Wikipedia, such as

  1. Wikipedia to Scientologists: Edit this, suckers, SFGate which has a link to the scientology arbitration case through http://www.idg.com/www/rd.nsf/rd?readform&u=http://en.wikipedia.org/wiki/Wikipedia:Requests_for_arbitration/Scientology
  2. Wikipedia Bans Scientology Church’s Edits,Wall Street Journal Digits Blog which links to http://en.wikipedia.org/wiki/Wikipedia:Requests_for_arbitration/Scientology and a wikimedia user: http://wikimediafoundation.org/wiki/User:JayWalsh
  3. The Christian Science Monitor horizons blog, which links to http://en.wikipedia.org/wiki/Scientology

I believe that either the Yahoo API isn't being used correctly, or simply that Yahoo has feeds from fewer sites. Perhaps you made a mistake somewhere, but on Google News and Bing News, I get results. I am still on a wikibreak, and have responded to show that you probably made a mistake somewhere; as such, I hope you won't abandon the bot. Thanks. Smallman12q (talk) 00:20, 8 June 2009 (UTC)

Now I'm really confused! I haven't abandoned the bot, don't worry. I'll see what I can do about this when I have time, preferably before Tuesday. My idea is to change the query from http://en.wikipedia.org/wiki/ to en.wikipedia.org/wiki. That should avoid some errors, if I am correct. I'll get back to you as soon as possible. Thanks, The Earwig (Talk | Editor review) 01:03, 8 June 2009 (UTC)
Many of those do not actually discuss the Wikipedia article in the article, they merely link to it as a source for background information. [11] just copied and pasted a sentence from one of our articles. The Scientology ones and [12] are the only ones that actually discuss the article. If the bot used the same criteria to add those, it would be a 63% error rate. Mr.Z-man 02:10, 8 June 2009 (UTC)
Well, as I understand it, the articles either need a link to a Wikipedia article or they need to be about a Wikipedia article/event to qualify... however you interpret "This article has been mentioned by a media organisation". Smallman12q (talk) 10:37, 8 June 2009 (UTC)

Would scraping Wikipedia's referer logs be of any use? — Dispenser 17:22, 8 June 2009 (UTC)

I think Mr.Z-man brings up an excellent point. I'm not willing to run a bot that has the potential to make edits with a 63% error rate; we really need to refine this first. To be less error-prone, per the statistics provided, {{onlinesource}} might be a better template to use, even though {{press}} is written with a broader audience in mind. Here's another idea: I didn't want to do this, because I absolutely despise this kind of thing when it comes to unsupervised bots, but what if the bot simply produced a report in the userspace and didn't tag any talk pages at all? It would require a user to manually review each one, which I don't think will work at all, and may produce a backlog. It will, however, lower the error rate to virtually zero. What do we think of these ideas?

This is why I did not want to run this bot initially: I knew there would be some annoying issues to sort through. Hopefully there's some way to get this to work. I have the code, but there are still some things we need to do before this can work. Thanks, The Earwig (Talk | Editor review) 21:57, 8 June 2009 (UTC)

It could post to a userpage, and then invalid links would be crossed out. Like you said, it would produce a backlog, but I'm sure there are ways to reduce the error rate. Perhaps only questionable links would be posted to the userpage? Smallman12q (talk) 23:44, 9 June 2009 (UTC)
One possibility would be to only post things that have "wikipedia" or "wiki" in the actual title of the article. If the samples above are representative, that would work perfectly. Mr.Z-man 23:54, 9 June 2009 (UTC)
Both of these seem like good suggestions. I just thought of something else: what if the bot searched for "wiki" in the title as Mr.Z-man suggested, but if it didn't find it, instead of logging the hit to a userpage, the bot simply used {{onlinesource}}? If "wiki" (including "wikipedia") is in the title, we can be almost certain that the article is directly related to Wikipedia, in which case {{press}} would be appropriate. If the page does not have Wikipedia in the title or summary, we can assume that it uses Wikipedia as a source in something completely unrelated to the site itself, in which case {{onlinesource}} would be more appropriate. We would still need a limited confirmation system, preferably with a bot=yes parameter, but that might work. The Earwig (Talk | Editor review) 03:59, 10 June 2009 (UTC)
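The decision rule described here could be as simple as the following sketch; note that a bot=yes parameter for {{onlinesource}} is only an assumption, mirroring the {{press|bot=yes}} idea proposed above:

```python
def choose_template(title, links_wikipedia_article):
    """Pick a talk-page template for a news item per the heuristic above:
    'wiki' in the headline suggests the story is *about* Wikipedia."""
    if "wiki" in title.lower():
        return "{{press|bot=yes}}"
    if links_wikipedia_article:
        return "{{onlinesource|bot=yes}}"
    return None  # nothing worth tagging
```

Applied to the samples above, the Scientology headlines would get {{press}} and the D-Day pieces {{onlinesource}}.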
I'm still not convinced the {{onlinesource}} thing is a good idea. I mean, do we really care that the Clovis News Journal linked to 6 different articles in their article? Do we really need a template on Talk:CNN because of a press release on PR-USA.net? Mr.Z-man 06:56, 10 June 2009 (UTC)

Bot to regularly save data from Special:Statistics

As discussed here, it would be useful to see the evolution of the data from Special:Statistics. I'd like to request a bot regularly saving this data in Wikipedia space, so we can have regular data for each parameter, then create graphs, etc, with the exceptions of Founder, Stewards, Importers, Transwiki importers and Uploaders, as they are not changing or unused. The first eight parameters change more often, so they may need to be saved more often than others, say every day, while others maybe every week. Cenarium (talk) 13:42, 27 May 2009 (UTC)
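For reference, the counters behind Special:Statistics are also exposed through the MediaWiki API (action=query&meta=siteinfo&siprop=statistics), so the collection step might look something like this sketch; the pipe-delimited row format is just one possible "simple text database":

```python
import json
from urllib.request import urlopen

API = ("https://en.wikipedia.org/w/api.php"
       "?action=query&meta=siteinfo&siprop=statistics&format=json")

def fetch_statistics():
    """Fetch the live counters behind Special:Statistics (needs network access)."""
    with urlopen(API) as resp:
        return json.load(resp)["query"]["statistics"]

def format_row(date, stats):
    """One pipe-delimited line of raw data, easy to graph later."""
    keys = ["pages", "articles", "edits", "users", "activeusers", "admins"]
    return date + "|" + "|".join(str(stats.get(k, "")) for k in keys)
```

A daily cron job appending format_row(today, fetch_statistics()) to a data page would give the raw series, from which the graphs can be regenerated.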

Hello, this is a pretty easy task and I'd be glad to do it. I just need to know how/where you want the data stored. The bot could generate some graphs as well. One possibility is to have a page with some stats graphs and a link to the raw data, which could be saved in a very simple (text) database so that anyone can download and analyze the data however they see fit. --ThaddeusB (talk) 20:04, 27 May 2009 (UTC)
Thanks. I agree it would be nice to also provide the raw data, while presenting it in an accessible way for general readers on a single page. I don't know about the page name; maybe Wikipedia:Statistics over time. So the bot would update the graphs regularly when the values change? Would it also make some kind of table? Cenarium (talk) 15:40, 3 June 2009 (UTC)
Sorry, I didn't notice your reply until just now. To answer your questions: yes, it would update the graphs, and yes, it would also put the data in a table. I have a couple of other things on my plate at the moment, but this is a pretty simple task technically speaking, so I will try to squeeze in writing the code within the next few days. --ThaddeusB (talk) 18:05, 8 June 2009 (UTC)
Thanks. Cenarium (talk) 17:06, 12 June 2009 (UTC)

A daily snapshot is already being recorded by a bot on this page. The stats go back to November 2008. Not sure if that's what you're looking for, but there you go. --Andrew Kelly (talk) 03:47, 10 June 2009 (UTC)

It lists only articles, pages, edits, users, admins and uploads; the other values are of interest too, as are updated graphs. Cenarium (talk) 17:06, 12 June 2009 (UTC)

WikiProject Star Trek

Template:Startrekproject is used on a whole lot of WP:TREK pages, redirecting to the current template at Template:WikiProject Star Trek. If possible, can a bot replace the old template with the new one on all pages, but LEAVE the existing quality and importance ratings in place? -- Aatrek / TALK 18:15, 9 June 2009 (UTC)

I'm afraid I don't understand the purpose of this. Aren't you trying to fix template redirects that aren't broken? The Earwig (Talk | Editor review) 18:18, 9 June 2009 (UTC)
(edit conflict) Seems unnecessary - what's the benefit? –xenotalk 18:18, 9 June 2009 (UTC)
I was just looking to do some cleanup. If it's unnecessary (and won't affect the project's article alerts), then never mind, I guess! -- Aatrek / TALK 18:20, 9 June 2009 (UTC)
 Not done per above. The redirects are fine and shouldn't interfere with most processes. If there is already re-assessment being done, then the redirect could be replaced, otherwise it would be an unnecessary edit with no visual change. –xenotalk 18:22, 9 June 2009 (UTC)

Sardinian Wikipedia

…is very small.

Bonjour,

Is an English bot allowed to work on other Wikipedias? Please see here and here. Thanks.

Budelberger (   ) 15:09, 11 June 2009 (UTC).

Each Wikipedia sets their own policy about bots, so you will have to check locally. --ThaddeusB (talk) 15:22, 11 June 2009 (UTC)

Bot for searching

Where can I find a bot that only fetches pages and searches the text inside? For example, I have a list of 50 articles, and I want to see which ones contain „string”. So I need a bot that just reads; it does not save, and does not even need to authenticate. Thanks Ark25 (talk) 18:52, 11 June 2009 (UTC)

You might want to look into the possibility of using AutoWikiBrowser. That can run through a list quickly (pre-parse mode): skip if the page does not contain "string". - Jarry1250 (t, c) 18:58, 11 June 2009 (UTC)
Thanks, I hadn't noticed the "skip" option in AWB before. Another question: is it possible to have variables in AWB? I have a list of 200 nonexistent pages, and I want to create them all, but the content must contain part of their name (the page Template:abc-stub must contain the string "abc", the page Template:xyz-stub must contain the string "xyz"). How can I automate that? Ark25 (talk) 13:41, 13 June 2009 (UTC)
The answer is "yes". It's going to involve %%pagename%% and some find-replacing. I'll give it a go myself. - Jarry1250 (t, c) 15:35, 13 June 2009 (UTC)

Thanks. Still, I need some tool to fetch the wikitext of the pages. Or can AWB save the content of pages to files? Sometimes I do not need to modify the pages; I just need to see the source and check which ones contain a certain string. Ark25 (talk) 10:36, 15 June 2009 (UTC)

Sorry, I got rather distracted. If you didn't manage to work the previous request out, the trick would be to have some sort of placeholder string in your append (e.g. "##FOOBAR##@"), then have one find/replace of "##FOOBAR##" -> "%%pagename%%", then underneath it another of "-stub@" -> "". Maybe try logging to file for your other request? I've never used it myself. - Jarry1250 (t, c) 10:54, 15 June 2009 (UTC)
Thanks. I can save the log to a file, but it adds a string like "AWB: Skipped: Page contains: string" and does not include the page name :D. Where should I add that feature request (to add page names to the skipped log)? Ark25 (talk) 20:16, 15 June 2009 (UTC)

Wikipedia talk:Manual of Style/Archive

Does someone have a Bot that can go through the 100+ archives and add

{{talkarchive}}
{{archive-nav}}

to the top of each page, where xx is the archive page number (without the extra markup - tl and br)? Most or even all of them already have talkarchive; few have archive-nav. I'm up to 19, but I realized there may be an easier solution. 199.125.109.126 (talk) 01:43, 14 June 2009 (UTC)

 Done. Thanks Chris G. I don't know if you type fast or used a bot, but thanks. 199.125.109.126 (talk) 03:40, 14 June 2009 (UTC)

My new Bot

Please make my bot called GameBot please. Thanks, WimpyKid (talk) 11:45, 15 June 2009 (UTC)

no Declined Q T C 11:56, 15 June 2009 (UTC)

Bot Request

Please make my bot "WorldBot"

It will help out newcomers, write on wikipedians' talk pages, and create new articles.

Thanks, WimpyKid (talk) 00:49, 16 June 2009 (UTC)

Please stop making ridiculously vague bot requests. If you want someone to run a bot for you, you will have to come up with a good idea and describe specifically what the bot is to do. If you are wanting someone to write a bot for you to run, you're unlikely to have much luck here. Anomie 02:47, 16 June 2009 (UTC)
N Not done because the idea is not well explained, per the above comment by Anomie, and for not following the instructions carefully enough. We need to know a lot more about your bot for this to work. You need to elaborate. What? I can't standardize the closings of requests by using a template? The Earwig (Talk | Editor review) 03:20, 16 June 2009 (UTC)
It's fine with me if you want to use a template. Anomie 03:44, 16 June 2009 (UTC)

Project tagging for the Food and Drink WikiProject

Hello, can someone go through all of the articles in Category:Food and drink stubs and tag those articles that do not have the {{WikiProject Food and drink}} banner on their talk page? Also, if any of these stubs do have the banner but lack a class assignment, please add one (stub, of course).

Thanks, Jeremy (blah blah) 06:11, 16 June 2009 (UTC)

Coding... LegoKontribsTalkM 20:00, 16 June 2009 (UTC)

Foundation and similar dates, in infoboxes

[I'm relisting this August 2008 request (including subsequent revisions), as the editor who said he would make the edits has not done so, nor replied to many enquiries as to progress (due at least in part to understandable family matters). Since there are hundreds of templates in need of this overdue change, and these are currently emitting broken microformats, the need may be considered pressing.]

To add "founded", "founded date" or "foundation", "released", "first broadcast" and similar dates to an infobox' hCard microformat, existing dates need to be converted to use {{Start date}}.

I've compiled a list of relevant infoboxes at User:Pigsonthewing/to-do#Date conversions.

Thank you. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 12:10, 16 June 2009 (UTC)

User lua

Hi. This template, {{user lua}}, was recently moved to {{user LUA}} because the letters "lua" referred to a language code. However, the template was not updated on people's userpages; this affects the following people [13] and needs to be fixed. The full discussion can be found here. Thanks. --Amazonien (talk) 05:49, 17 June 2009 (UTC)

Seems to fall under WP:R2D. No benefit in fixing. –xenotalk 17:00, 17 June 2009 (UTC)
No, the template for the Lua programming language was moved from {{user lua}} to {{user LUA}} to make way for a template for the Tshiluba language. I'll have AnomieBOT take care of this, as well as {{user lua-1}}, {{user lua-2}}, and {{user lua-3}}. And next time, Amazonien, please have the bot do the replacement before replacing the old template. Anomie 17:14, 17 June 2009 (UTC)
I probably should have RTFD before commenting ;p –xenotalk 17:52, 17 June 2009 (UTC)
Y Done Looks like there are two you should take care of manually: [14][15] Anomie 20:46, 17 June 2009 (UTC)

Needed-class cleanup.

Could someone run a bot on articles found in Category:Needed-Class articles (and subcats) and update the ratings? It seems like most of them are redirects or disambigs. Or is this normal? Headbomb {ταλκκοντριβς – WP Physics} 15:14, 17 June 2009 (UTC)

Moving translated page template

The {{translated page}} template is often added incorrectly to articles, rather than to the talk page (in violation of WP:SELF). I've been fixing them manually when they appear, but if there's a bot that could do this automatically on a regular basis (say once or twice a week), it would save me the effort. This could probably be done by an existing template clean-up bot. The task would be as follows:

  • For each article listed in here...
    • Note the parameters used in the {{translated page}} template in the article
    • Remove the template, with an edit summary along the lines of "Moving translated page template to talk page as per WP:SELF"
    • Add the template to the top of the corresponding talk page, with a similar edit summary

Are there any existing bot owners who are willing to add this task? —  Tivedshambo  (t/c) 06:50, 17 June 2009 (UTC)
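The move step could be sketched as a plain text transformation over the two pages' wikitext; this regex-based version (function name mine) assumes the template call contains no nested templates:

```python
import re

TEMPLATE_RE = re.compile(r"\{\{\s*[Tt]ranslated page\s*(\|[^{}]*)?\}\}\n?")

def move_translated_template(article_text, talk_text):
    """Cut the {{translated page}} call out of the article and prepend it
    to the talk page text, preserving its parameters."""
    m = TEMPLATE_RE.search(article_text)
    if not m:
        return article_text, talk_text
    template = m.group(0).strip()
    new_article = article_text[:m.start()] + article_text[m.end():]
    return new_article, template + "\n" + talk_text
```

The bot would then save both pages with the edit summaries suggested in the task list.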

Coding... I'll do it. BTW, that template needs updating for the recent licensing update. Anomie 17:47, 17 June 2009 (UTC)
Oops - I'd missed the reference to GFDL. Thanks! —  Tivedshambo  (t/c) 05:24, 18 June 2009 (UTC)
BRFA filed Anomie 03:06, 19 June 2009 (UTC)

Adminbot to repair cut-and-paste moves

was: Removing Category:Possible cut-and-paste moves

This is a pretty simple request, and I could do it myself, but if memory serves, someone else already has an approved bot that does this.

The short version of the request is: I need a bot to go through the pages in Category:Possible cut-and-paste moves and remove said category from the page. All of the involved pages will be redirects. There are about 2,500 pages involved.

At Magioladitis's request, I had the bot tack this category onto any redirects, so that a person could go through and figure out whether or not they needed to be history-merged. I have since come up with a much better method (see User:Mikaey/Possible cut-and-paste moves), so now I just need a bot to go through and detag all these pages for me.
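The detagging itself is a one-line text transformation per page; a sketch (function name mine, sort keys handled):

```python
import re

def remove_move_category(wikitext):
    """Strip the cut-and-paste tracking category from a redirect's wikitext."""
    return re.sub(
        r"\[\[\s*Category\s*:\s*Possible cut-and-paste moves\s*(\|[^\]]*)?\]\]\n?",
        "",
        wikitext,
    )
```

A bot would apply this to every page in the category and save only where the text actually changed.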

Thanks, Matt (talk) 09:38, 18 June 2009 (UTC)

Please note the discussion at Wikipedia:Administrators' noticeboard#Admins with history merging experience wanted before anything is done with this request. ···日本穣? · Talk to Nihonjoe 16:34, 18 June 2009 (UTC)
The scope of this request should be expanded to be an adminbot that actually fixes the cut and paste moves that Mikaey's bot reports with 100% confidence (removing this category where present). I've changed the heading to pique some interest and get the botwriters' creative juices flowing. –xenotalk 21:55, 18 June 2009 (UTC)

I don't usually like history merges in general. If there's any overlap between the two pages, the page history gets mangled, diffs become confusing and unreadable, etc. There's a core feature (disabled by default) that was written by one of Wikimedia's contractors to deal with split histories safely (see "#$wgGroupPermissions['sysop']['mergehistory'] = true;" in DefaultSettings.php). If possible, I'd much rather see that activated than an ad hoc bot solution. For what it's worth. Perhaps someone can convince me this is a good idea in a bot request, given appropriate safety measures.... --MZMcBride (talk) 02:46, 19 June 2009 (UTC)

capitalization fixes in articles using {{nihongo}}

I'd like a bot to go through all articles using {{Nihongo}} and check for capitalization errors in the content of that template. For example, in Fukuyama University, the {{nihongo}} template appears like this: {{nihongo|'''Fukuyama University'''|福山大学|Fukuyama daigaku}}. The bot would need to check the first parameter ('''Fukuyama University''') and check to see if the contents of the third parameter used the same capitalization. In this case, it is using "Fukuyama daigaku" instead of "Fukuyama Daigaku". The bot would change the capitalization of the contents of the third parameter if they did not match, in this case changing them to "Fukuyama Daigaku". I hope that makes sense. Can this be done? Thanks! ···日本穣? · Talk to Nihonjoe 21:34, 20 June 2009 (UTC)

The capitalization does not correspond in film titles, and I am afraid that it cannot be solved by a bot. Or can it? For example An Osaka Story, Freelance Samurai, Zenigata Heiji: Human-skin Spider, and others. --Snek01 (talk) 22:32, 20 June 2009 (UTC)
Well, it mostly corresponds. I fixed the first two you mentioned. Perhaps it could instead make a list of them somewhere like Wikipedia:WikiProject Japan/Capitalization fixes so people could go through them manually? ···日本穣? · Talk to Nihonjoe 22:48, 20 June 2009 (UTC)
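A rough sketch of the check discussed above, combining both suggestions: it proposes a corrected romaji string only when every word of the English title is capitalized, and returns None otherwise so the article can be added to a manual-review list (the film-title cases). This is illustrative only; the word-by-word rule is an assumption, not the project's agreed algorithm:

```python
def suggest_romaji_capitalization(english, romaji):
    """Given the first parameter (English title) and third parameter
    (romaji) of a {{nihongo}} template, return the romaji with each
    word capitalized if every English word is capitalized; otherwise
    return None to flag the pair for manual review."""
    eng_words = english.strip("'").split()  # strip bold markup quotes
    if not eng_words or not all(w[:1].isupper() for w in eng_words):
        return None
    return " ".join(w[:1].upper() + w[1:] for w in romaji.split())
```

For the Fukuyama University example this yields "Fukuyama Daigaku"; a title like "The Tale of Genji" would be flagged for a human instead.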

Redirect creation for WP Elements.

In the recent overhaul of WP:Elements templates and categories I've started creating redirects to the main articles for each isotope, but this is very tedious, as there are lots of isotopes (thousands).

For example, Hydrogen-7 redirects to Isotopes of hydrogen#Hydrogen-7 and is categorized in Category:Isotopes of hydrogen. Such redirects need to be created for all isotopes found in the lists of isotopes (don't bother with metastable isotopes for now).

The text for creation is (pay attention to capitalization please)

#REDIRECT[[Isotopes of elementname#Elementname-XX]]

[[Category:Isotopes of elementname]]

Here are the Elementnames and the range of XX that needs to be created.

Redirect ranges per elements
  • Scandium (36-60)
  • Titanium (38-63)
  • Vanadium (40-65)
  • Chromium (42-67)
  • Manganese (44-69)
  • Iron (45-72)
  • Cobalt (47-75)
  • Nickel (48-78)
  • Copper (52-80)
  • Zinc (54-83)
  • Gallium (56-86)
  • Germanium (58-89)
  • Arsenic (60-92)
  • Selenium (65-94)
  • Bromine (67-97)
  • Krypton (69-100) (sort as [[Category:Isotopes of elementname|Krypton-069]], and so on)
  • Rubidium (71-102)
  • Strontium (73-105)
  • Yttrium (76-108)
  • Zirconium (78-110)
  • Niobium (81-113)
  • Molybdenum (83-115)
  • Technetium (85-118)
  • Ruthenium (87-120)
  • Rhodium (89-122)
  • Palladium (91-124)
  • Silver (93-130)
  • Cadmium (95-132)
  • Indium (97-135)
  • Tin (99-137)
  • Antimony (103-139) (resume normal sorting)
  • Tellurium (105-142)
  • Iodine (108-144)
  • Xenon (110-147)
  • Caesium (112-151)
  • Barium (114-153)
  • Lanthanum (117-155)
  • Cerium (119-157)
  • Praseodymium (121-159)
  • Neodymium (124-161)
  • Promethium (126-163)
  • Samarium (128-165)
  • Europium (130-167)
  • Gadolinium (134-169)
  • Terbium (135-171)
  • Dysprosium (138-173)
  • Holmium (140-175)
  • Erbium (143-177)
  • Thulium (145-179)
  • Ytterbium (148-181)
  • Lutetium (150-184)
  • Hafnium (153-188)
  • Tantalum (155-190)
  • Tungsten (158-192)
  • Rhenium (160-194)
  • Osmium (162-197)
  • Iridium (164-199)
  • Platinum (166-202)
  • Gold (169-205)
  • Mercury (171-210)
  • Thallium (176-212)
  • Lead (178-215)
  • Bismuth (184-218)
  • Polonium (188-220)
  • Astatine (191-223)
  • Radon (195-228)
  • Francium (199-232)
  • Radium (202-234)
  • Actinium (206-236)
  • Thorium (209-238)
  • Protactinium (212-240)
  • Uranium (217-242)
  • Neptunium (225-244)
  • Plutonium (228-247)
  • Americium (231-249)
  • Curium (232-252)
  • Berkelium (235-254)
  • Californium (237-256)
  • Einsteinium (240-258)
  • Fermium (242-260)
  • Mendelevium (245-262)
  • Nobelium (248-264)
  • Lawrencium (251-266)
  • Rutherfordium (253-268)
  • Dubnium (255-270)
  • Seaborgium (258-273)
  • Bohrium (260-275)
  • Hassium (263-277)
  • Meitnerium (265-279)
  • Darmstadtium (267-281)
  • Roentgenium (272-283)
  • Ununbium (277-285)
  • Ununtrium (278-287)
  • Ununquadium (285-289)
  • Ununpentium (287-291)
  • Ununhexium (289-293)
  • Ununseptium (291-292)
  • Ununoctium (293-294)

More to come. Headbomb {ταλκκοντριβς – WP Physics} 02:29, 18 June 2009 (UTC)

The list is now complete. Headbomb {ταλκκοντριβς – WP Physics} 14:37, 18 June 2009 (UTC)
BRFA filed Anomie 03:05, 19 June 2009 (UTC)
Y Done 2556 redirects created, 268 already existed. I also manually fixed the sort keys on the existing articles/redirects that needed "Element-0XX" to sort correctly. Anomie 16:07, 20 June 2009 (UTC)
Great. Now if you could tag 'em as detailed in the BRFA, that would be great. No rush, so take your time. Headbomb {ταλκκοντριβς – WP Physics} 22:41, 20 June 2009 (UTC)
Oh yeah, I forgot. Thanks for reminding me. Anomie 03:03, 21 June 2009 (UTC)
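The redirect text specified above can be generated mechanically; a minimal sketch, assuming the only variation is the zero-padded sort key requested for the Krypton-through-Indium ranges (where mass numbers cross 100):

```python
def isotope_redirects(element, lo, hi, pad=False):
    """Yield (title, wikitext) pairs for Element-XX isotope redirects
    in the requested format. With pad=True the category sort key is
    zero-padded to three digits (e.g. Krypton-069) so that two-digit
    masses sort before three-digit ones."""
    for mass in range(lo, hi + 1):
        title = f"{element}-{mass}"
        if pad:
            cat = f"[[Category:Isotopes of {element.lower()}|{element}-{mass:03d}]]"
        else:
            cat = f"[[Category:Isotopes of {element.lower()}]]"
        text = f"#REDIRECT[[Isotopes of {element.lower()}#{title}]]\n\n{cat}"
        yield title, text
```

The bot that ran this task was Anomie's (see the BRFA above); this sketch only mirrors the format requested, not that bot's actual code.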

Advance questions for a WikiProject NRHP banner placement bot

Hi, I have a bot request partially drafted at Wikipedia:WikiProject National Register of Historic Places/Botrequest2. I am hoping to get the NRHP WikiProject banner added to many articles, perhaps thousands, that currently lack it. The WikiProject currently has about 20,000 articles tagged with its banner; I am thinking a bot could add several thousand more by addressing articles in certain categories and having certain titles. And I am hoping a bot could complete a day or two before July 4, the end date of a cleanup drive we have going. It is taking time to get all the categories stated and checked carefully for the bot request. I wonder if I could ask:

  1. Is the format of the draft request okay?
  2. What would be the lead time required to run this bot request, so that it would be done before July 4? If I am asking in advance, and if it takes me a couple more days to finish the bot request, will that be okay? I have never gotten a bot run before, sorry for my newbie-esque questions. I am just eager to have this done to be able to announce new counts of articles for the wikiproject as part of other announcements on July 4.
  3. And, can a bot find all instances of a given phrase ("National Register of Historic Places listings in") in any wikipedia title, whether the article is in a category or not?
  4. And, for a wikiproject banner-placing bot, what should be done for cases where an article is a redirect? I don't know if placing the wikiproject banner is necessary or helpful for redirects.

Any responses will be appreciated. doncram (talk) 11:38, 21 June 2009 (UTC)

As for timeframe, if you get one of the pre-approved wikiproject tagging bots to do the run it just depends on how many pages end up needing to be checked and how badly the Wikipedia servers are overloaded during the run. You've got about 12 days, which should be plenty of time if there is nothing too odd about the request (see reply #1). Replies:
  1. Format looks fine. I'm not sure whether you're asking for "and subcategories" or not, because you don't actually ask for subcategories but you do give lists of categories to skip. If at all possible, don't ask for "and subcategories" as it ends up taking that much longer for the whole list of subcategories to be generated and then checked by your project.
  2. "All titles containing a phrase", not really; someone with toolserver access would have to generate a list of those titles if it can be done at all. But "All titles beginning with the phrase" can be done easily enough using the API equivalent of Special:PrefixIndex.
  3. It's up to the project. Some find it useful to know what their redirects are, while some think tagging redirects is worse than useless.
Also, as far as pages without existing assessments go: some bots can determine whether an article is a stub, redirect, or disambiguation page and automatically assess (or skip) those pages. Bots can also copy the highest/lowest assessment that any other Wikiproject has tagged the page with if your project wants that done, or they can just tag it with no assessment. Anomie 14:08, 21 June 2009 (UTC)
Thanks, just the info i needed. Per instructions on this page on what makes a successful bot request vs. not, I was not going to ask for "and subcategories". I am building up an explicit list of all categories in the NRHP wikiproject to ask directly, and indeed finding some problem ones that need to be excluded, which is what is taking time. And, all titles beginning with a phrase will work fine, too. Thanks! doncram (talk) 17:46, 21 June 2009 (UTC)
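The "all titles beginning with the phrase" approach Anomie describes uses the MediaWiki API's list=allpages with a prefix. A minimal sketch of building such a request (the endpoint URL and parameter values here are illustrative; consult the API documentation for current continuation behavior):

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # assumed endpoint

def prefix_query_url(prefix, limit=500, apcontinue=None):
    """Build an API request URL equivalent to Special:PrefixIndex,
    using list=allpages with apprefix. Pass the 'apcontinue' value
    from a previous response to fetch the next batch of titles."""
    params = {
        "action": "query",
        "list": "allpages",
        "apprefix": prefix,
        "aplimit": limit,
        "format": "json",
    }
    if apcontinue:
        params["apcontinue"] = apcontinue
    return API + "?" + urlencode(params)
```

Fetching each URL and reading query.allpages from the JSON response yields the matching titles in batches.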

Six Degrees of Wikipedia Bot

I'd like to make a simple Python script which, given two Wikipedia articles, attempts to find a way to get from one to the other via a series of intermediary links. It would not have to log in as a user, would only need to browse Wikipedia pages normally as a reader would, and would be completely harmless in every way. —Preceding unsigned comment added by 124.170.152.177 (talkcontribs) 14:44, 20 June 2009 (UTC)

To what end? It seems like this bot would query the API and cause pages to be served for no particular reason, i.e. it could create a drain on the servers. –xenotalk 14:46, 20 June 2009 (UTC)
This is against our policy, which requires that bots be useful and not consume resources unnecessarily. Sorry, but not a good task - Kingpin13 (talk) 14:47, 20 June 2009 (UTC)
This could be doable with a third-party script, e.g., a program hosted on another website which searches Wikipedia pages for wikilinks. –Drilnoth (T • C • L) 15:09, 20 June 2009 (UTC)
Something like the ones listed at the bottom of Wikipedia:Six degrees of Wikipedia? Mr.Z-man 15:21, 20 June 2009 (UTC)
River stopped maintaining it after the database format change a few years back and has since removed it altogether. The source code is available in the MediaWiki SVN. I think it's coded in Java on the back end (table compiler) and uses Python for the front end (table queries and HTML). It would be a good choice for a stable project, as people seemingly keep reinventing it. — Dispenser 22:25, 20 June 2009 (UTC)
I suggest you use a database dump rather than waste actual resources. --ThaddeusB (talk) 15:40, 20 June 2009 (UTC)
See [16]. Headbomb {ταλκκοντριβς – WP Physics} 00:33, 22 June 2009 (UTC)
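Following ThaddeusB's suggestion, the core of such a tool is just a breadth-first search over a link graph built offline from a database dump rather than from live page requests. A minimal sketch, where `links` is an assumed title-to-outlinks mapping:

```python
from collections import deque

def find_path(links, start, goal):
    """Breadth-first search for a shortest chain of wikilinks from
    start to goal. `links` maps each page title to an iterable of the
    titles it links to (e.g. built from a database dump)."""
    if start == goal:
        return [start]
    parents = {start: None}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, ()):
            if nxt not in parents:
                parents[nxt] = page
                if nxt == goal:
                    # Walk the parent chain back to the start.
                    path = [goal]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]
                queue.append(nxt)
    return None  # no chain of links connects the two pages
```

Because BFS explores by distance, the first chain found is a shortest one, which is what the "six degrees" question asks for.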