User talk:Cyberpower678/Archive 38


memoire-du-cyclisme.net

Hi Cyberpower678, it looks as if memoire-du-cyclisme.net has been dead since 2012. Can all links to it be rescued? It would be great if your bot could run through and archive them with snapshots from that year. This site is heavily used in cycling-related articles; in my case it was 1990 Tour de France. I'm not sure why this hasn't been picked up sooner. Thanks. BaldBoris 15:21, 5 September 2016 (UTC)

Am I missing something? I'm only asking for help because this site is used to cite the most important info, in my case the Tours de France, as I'm sorting all of them out. Can you let me know if this can be done with InternetArchiveBot? Otherwise I guess I'll have to archive them individually? BaldBoris 16:40, 6 September 2016 (UTC)

I'm not the bot operator (who may be too busy to get back to you), but I did look at your article on the 1990 Tour. There was one reference there that linked to the memoire-du-cyclisme website. The link was archived at archive.org, but there was no dead-link template placed at the article, which is what I think the bot has to see in order to rescue the link. I don't know if it's possible for the bot to be able to rescue all relevant links based on reporting a particular website being down. Dhtwiki (talk) 19:53, 6 September 2016 (UTC)
Fair enough, but it does say chat in his sig. memoire-du-cyclisme.net moved to memoire-du-cyclisme.eu some time in 2012, with some parts (including the 1990 Tour page) now needing authorization. If [1] was tagged dead, would the bot go to the most recent live page? If it did, that wouldn't help, as it's also down. So basically every link (must be hundreds, maybe more) to memoire-du-cyclisme.net needs to be archived from 2012 or before. Maybe Kaldari could help? BaldBoris 21:01, 6 September 2016 (UTC)
If I add {{dead link}} to them will it fetch an archive before the access date? BaldBoris 21:24, 6 September 2016 (UTC)
  • Do not underestimate IABot. It's a very sophisticated and intelligent piece of code. To answer your question, yes it can and it will attempt to fetch a snapshot as close to the access date as possible. I just have to mark the domain as dead in the DB, and IABot will rescue them all, even with no tag.—cyberpowerChat:Limited Access 21:28, 6 September 2016 (UTC)
I have no doubt it is. That's what I wanted to begin with, but didn't know how to request it. Cheers. BaldBoris 03:00, 7 September 2016 (UTC)
I've marked the entire .net domain for that site as dead. The .eu domain seems to still be working.—cyberpowerChat:Limited Access 21:23, 7 September 2016 (UTC)
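The snapshot fetching described above can be approximated with the Wayback Machine's public availability API. A minimal sketch in PHP (not IABot's actual code; the URL and timestamp are example values):

<?php
// Fetch the snapshot closest to a target access date via the availability API.
$url       = 'http://www.memoire-du-cyclisme.net/'; // example value
$timestamp = '20120101';                            // target date, YYYYMMDD
$api = 'https://archive.org/wayback/available?url=' . urlencode($url)
     . '&timestamp=' . $timestamp;
$data = json_decode(file_get_contents($api), true);
// The API reports the closest snapshot, if any, under this key:
$snapshot = $data['archived_snapshots']['closest']['url'] ?? null;
echo $snapshot ?: 'no snapshot found';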

Bot signature is broken

The bot signature HTML is broken, leaving dirty wiki-markup on every page it appears on and wrecking the display of all subsequent content on a talk page. The signature fails to close the <sup> and <span> tags. This is extremely discourteous. Please fix this ASAP, thanks. — btphelps (talk to me) (what I've done) 07:11, 7 September 2016 (UTC)

I see no rendering changes, since MW automatically closes the tag when rendering the signature.—cyberpowerChat:Limited Access 19:11, 7 September 2016 (UTC)
If you look at this entry, you can see the following HTML:
Plain text
[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II]]<small><sub style="margin-left:-14.9ex;color:green;font-family:Comic Sans MS">[[User talk:Cyberbot II|<span style="color:green">Talk to my owner]]:Online</sub></small>
As rendered
cyberbot IITalk to my owner:Online
Corrected HTML (in bold)
[[User:Cyberbot II|<sup style="color:green;font-family:Courier">cyberbot II</sup>]]<small><sub style="margin-left:-14.9ex;color:green;font-family:Comic Sans MS">[[User talk:Cyberbot II|<span style="color:green">Talk to my owner</span>]]:Online</sub></small>
The <sup...> and the <span...> tags are not closed. The "As rendered" sig above displays the unclosed tags as highlighted with a crosshatch when viewed in the editor, indicating an error. MW has not closed these tags automatically as expected. The "Corrected HTML" version shows the missing tags in bold. — btphelps (talk to me) (what I've done) 07:27, 8 September 2016 (UTC)

Malfunction of InternetArchiveBot?

This bot is making two "weird" edits that I don't think are in line with what it is supposed to do, or even correct. Both at ...Ye Who Enter Here and 0-8-4 it is adding extra URL "crap", as I call it, to the webcitation links, which is not needed. Is this correct? Is it attempting to fix another issue I'm not aware of and subsequently adding this text? Thanks for the input. - Favre1fan93 (talk) 22:28, 8 September 2016 (UTC)

Actually this is correct operation. A recent RFC was closed mandating the use of long form URLs. All archive URLs need to have the snapshot time and original URL.—cyberpowerChat:Offline 22:58, 8 September 2016 (UTC)
Can you link me to this RfC please? WP:WEBCITE#Use within Wikipedia does not make any mention of this, and subsequently says the nine digit ID is fine to use. - Favre1fan93 (talk) 23:21, 8 September 2016 (UTC)
WEBCITE is an essay (a non-consensus page) that needs to be updated with results of the RfC Wikipedia_talk:Using_archive.is#RfC:_Should_we_use_short_or_long_format_URLs.3F. -- GreenC 23:39, 8 September 2016 (UTC)
(I just updated WP:WEBCITE#Use within Wikipedia and WP:ARCHIVEIS#Use within Wikipedia - they said something different when Favre1fan93 originally linked to it above). -- GreenC 23:55, 8 September 2016 (UTC)

cyberbot II marked URL as unfit

Hi Cyberpower678, I noticed that cyberbot II made this edit, marking the dead URL as unfit. The archive URL does look like a wild 90's site, but otherwise doesn't seem (to me) to meet the "unfit" definition at Category:CS1 maint: Unfit url. Is Cyberbot advising use of a different site? Is there a list that Cyberbot is checking? Clarification or advice would be appreciated! - Paul2520 (talk) 02:42, 9 September 2016 (UTC)

It simply moved the archive URL to the correct field and hid the original. So the final output was still the same.—cyberpowerChat:Online 11:00, 9 September 2016 (UTC)

Mangling articles

this edit is pretty bad. the bot removes the back half of an HTML comment, but leaves the front half. can you make sure this was the only one? Frietjes (talk) 20:44, 8 September 2016 (UTC)

It would seem that the comment was mistaken as part of the URL. Given how extensively I tested this release, I can safely assume that this is an extraordinarily rare occurrence, since I did not encounter it at all. Nonetheless, I can easily fix this.—cyberpowerChat:Limited Access 21:32, 8 September 2016 (UTC)
Fixed in the next release.—cyberpowerChat:Online 17:21, 10 September 2016 (UTC)

Bot error?

Hello. Please explain how my request here triggered your bot. Was it an error? Thanks. X4n6 (talk) 03:43, 10 September 2016 (UTC)

Nope, you misformatted your request. It was fixed here.—cyberpowerChat:Online 13:32, 10 September 2016 (UTC)
Ok. I thought the objection was content related. X4n6 (talk) 22:47, 10 September 2016 (UTC)

Protocol relative URLs

There are cases of protocol-relative URLs, e.g. |archiveurl=//web.archive.org/ in Ewa Kopacz. At one time consensus was to use PRURLs for Wayback, but the last RfC said to convert to https.[2] This can easily be fixed with AWB, but it might mess up the CBDB if the changes are not sent in via SQL. How does the bot handle PRURLs in the CBDB: are they converted to https, retained, skipped? Do you want to add a feature to IABot to do the conversion? Should I add it to wayback medic (though it's now 25% done, I could go back over those with only the prurl feature enabled)? -- GreenC 14:45, 6 September 2016 (UTC)

The newest update to IABot converts them automatically internally, and externally if enabled on the config page.—cyberpowerChat:Limited Access 21:29, 6 September 2016 (UTC)
Excellent. I ran a database regex and found about 18k links in 4600 articles. A complication: WM isn't set up to process PRURL links when verifying link status, so it skips them. I could run AWB and do the conversion quickly, but it sounds like IABot has it covered, and it might be better to let it discover the links so it keeps the internal DB in sync with the external one. Would definitely recommend enabling external conversion. -- GreenC 02:22, 7 September 2016 (UTC)
It wouldn't matter actually. With the new update, which I hope to have running in the next day or two, it runs several checks on the URL and converts it before using it. So even if it doesn't edit, the conversion still happens internally. This is useful for when users want to submit archive URLs but submit them in the wrong format. So the checks and conversions are ongoing. Another step up in bot intelligence. :D—cyberpowerChat:Offline 02:26, 7 September 2016 (UTC)
Yeah, that's great. Alright, maybe I'll try to fix them on wiki so WM can process the 18k links. I was hoping WM would finish before IAB started up again, so it could complete the SQL updates and it would be safe to start removing cbignore tags... it takes time to verify a million links and send a quarter million edits. Probably another 4-7 days; currently about half done. But if you're ready to go, I can try to catch up, if IAB halts again, by processing the articles from the start date forward. -- GreenC 13:50, 7 September 2016 (UTC)
The bot has been deployed but it's only running on a slower speed at the moment. Once it's been established that the lingering bugs are non-existent, I'll restore the full speed.—cyberpowerChat:Offline 23:44, 7 September 2016 (UTC)
@Green Cardamom: [https://en.wikipedia.org/wiki/Alaska?diff=prev&oldid=738435220 An example of IABot's handiwork.]—cyberpowerChat:Offline 00:24, 9 September 2016 (UTC)
Yeah, I'm watching the diffs; the bot is really amazing work. Still some mistakes with deadurl (setting yes when it should be no and the other way round), but hopefully that will get cleared up. I don't fully understand what bots and databases are involved with the dead link checking and how IAB interacts. -- GreenC 01:18, 9 September 2016 (UTC)
It's a bit complicated to explain.—cyberpowerChat:Offline 02:09, 11 September 2016 (UTC)
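The protocol-relative conversion discussed at the top of this thread amounts to giving scheme-less archive URLs an https scheme. A minimal sketch (not IABot's actual code; the function name is hypothetical):

<?php
// Prefix protocol-relative Wayback URLs with https, per the RfC outcome.
function normalizeArchiveUrl(string $url): string {
    if (substr($url, 0, 2) === '//') {
        return 'https:' . $url;
    }
    return $url;
}

echo normalizeArchiveUrl('//web.archive.org/web/20160101000000/http://example.com/');
// => https://web.archive.org/web/20160101000000/http://example.com/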

Nomination for deletion of Template:Adminstats/!dea4u

Template:Adminstats/!dea4u has been nominated for deletion. You are invited to comment on the discussion at the template's entry on the Templates for discussion page. ~ Rob13Talk 05:56, 11 September 2016 (UTC)

Cyberbot RFPP clerking

Hi Cyberpower, just a small thing: at RFPP, Cyberbot seems to think that there is 1 (more?) pending request than there actually is. This might be a bug. Best, Airplaneman 10:31, 11 September 2016 (UTC)

User analysis

https://tools.wmflabs.org/xtools-ec/?user=Kudpung&project=en.wikipedia.org is not working. I need some of this info because it is required by ArbCom. I have notified the other developers. --Kudpung กุดผึ้ง (talk) 06:34, 11 September 2016 (UTC)

Works for me.—cyberpowerChat:Online 11:26, 11 September 2016 (UTC)

Report

The diffs are looking very good. It sometimes/often has trouble with deadurl=no. [3][4][5][6]... these should be yes or, if not known, bot: unknown. -- GreenC 14:49, 8 September 2016 (UTC)

Actually, no, it's not. It wasn't tagged as dead initially, nor does the bot see it as dead yet. Since an archive URL is there that needs to be fixed, it will fix it but keep it marked as alive.—cyberpowerChat:Online 16:51, 8 September 2016 (UTC)
Err, actually I just discovered a bug. :/—cyberpowerChat:Online 16:56, 8 September 2016 (UTC)
Example (first and third edit).[7] Would it be better to default to "bot: unknown", since it doesn't know the link status yet? If an archive URL exists, it's more likely than not due to a dead link, but the bot defaults to the opposite conclusion (more likely than not the link is alive, i.e. "no"). (Maybe this is already changed and I'm looking at a cached batch.) -- GreenC 20:49, 8 September 2016 (UTC)
When the deadurl parameter is omitted, it's the same as deadurl=no. Bot:unknown is for hiding the original URL. In any event I found a minor bug, and will have it fixed soon.—cyberpowerChat:Limited Access 21:31, 8 September 2016 (UTC)
Well, I've had my mind boggling over this all day today, and while conceptually it's easily feasible, with the current code structure it's anything but. I am still at square one here, trying to come up with a non-hacky solution.—cyberpowerChat:Offline 02:11, 11 September 2016 (UTC)
Found a solution, and implemented it. Here you go. :D—cyberpowerChat:Online 13:52, 11 September 2016 (UTC)
Excellent! There's nothing worse than thinking a problem can't be fixed... and nothing better than finding a solution. I'll keep an eye out in the diffs. -- GreenC 15:11, 11 September 2016 (UTC)

Use of /web/

I'm finding cases where "/web/" matters. Example:

  1. https://web.archive.org/20110519131225/http://www.nwkansas.com/cfpwebpages/pdf%20pages%20-%20all/cfp%20pages-pdfs%202001/cfp%20pages%2F10%20Oct/Week%203/%20%20Monday/front.pdf
  2. https://web.archive.org/web/20110519131225/http://www.nwkansas.com/cfpwebpages/pdf%20pages%20-%20all/cfp%20pages-pdfs%202001/cfp%20pages%2F10%20Oct/Week%203/%20%20Monday/front.pdf

#2 returns the header "HTTP/1.1 400 Invalid URI: noSlash". The API returned #2 (wrong). So this will be a new task for WM: remove "/web/" if the URL isn't working but the non-"/web/" version is. It's already coded going forward.

It also means automatically reformatting existing URLs to "/web/" might be a problem. WM2 has been doing this and unknowingly deleting links with this problem, since the newly formatted URL didn't verify. I don't know how many; probably a couple hundred. I can figure out which ones if IABot will accept an SQL update with new archive URLs that don't contain "/web/". -- GreenC 15:08, 11 September 2016 (UTC)

Actually that is an indication of an issue with the Wayback Machine and should be fixed on their end.—cyberpowerChat:Limited Access 18:19, 11 September 2016 (UTC)
OK, I'll report it to them after WM2 is finished and I can collect data on all the links it found, so the scale of the problem is known. At the same time, the robots.txt numbers and anything else the data reveals. I need to write a program to do post-processing of dead links. Right now I have the list of URLs but not the status codes, which will be useful to explore. -- GreenC 19:28, 11 September 2016 (UTC)
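Until the Wayback Machine fixes it on their end, the workaround described above can be sketched as trying both forms of the URL and keeping the one that actually responds. A hypothetical helper (not WaybackMedic's actual code; the status check is a simplification):

<?php
// Try the URL as given and with "/web/" toggled; keep whichever form the
// Wayback Machine answers with a 2xx/3xx status line.
function workingWaybackUrl(string $url): ?string {
    $alt = (strpos($url, 'web.archive.org/web/') !== false)
        ? str_replace('web.archive.org/web/', 'web.archive.org/', $url)
        : str_replace('web.archive.org/', 'web.archive.org/web/', $url);
    foreach ([$url, $alt] as $candidate) {
        $headers = @get_headers($candidate);
        if ($headers && preg_match('#HTTP/[\d.]+ +[23]\d\d#', $headers[0])) {
            return $candidate;
        }
    }
    return null; // neither form responded with a usable status
}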

Bad bot!

https://en.wikipedia.org/w/index.php?title=Malta&oldid=739116927

The above edit by your bot cut out 76kb of article content. Something is amiss. Rhialto (talk) 22:00, 12 September 2016 (UTC)

 Fixed—cyberpowerChat:Online 16:19, 13 September 2016 (UTC)


FelisLeo/Ronaz

Hey Cyberpower, it looks like the user talk page didn't get moved when renaming this user. clpo13(talk) 16:13, 13 September 2016 (UTC)

@Legoktm: It appears the rename bugged up. This isn't a case of me forgetting to suppress the creation of redirects, but the old user page really didn't get moved when it should have. :p—cyberpowerChat:Online 16:18, 13 September 2016 (UTC)

Appreciated! ronazTalk! 17:04, 13 September 2016 (UTC)

Port 80

Inclusion of :80 is breaking links. Examples:

The :80 is being added by the API. I believe it may be there as convenience information for the calling application and not reflect the actual Wayback link. -- GreenC 01:29, 12 September 2016 (UTC)

In that case that's yet another bug to report to them. If the API is returning that information, then it should stand to reason that the URL actually works. :/ Did you report the last bug yet? I didn't get any CCs.—cyberpowerChat:Online 12:28, 12 September 2016 (UTC)
No, in the previous post I said I was waiting till I had more data; WM is still running. For port 80, I don't think it's a bug, it's a feature: they are intentionally returning port information. However, it's not part of the Wayback database URL. Removing :80 won't break anything, but keeping it will (occasionally). Anyway, I'll report this now. -- GreenC 13:11, 12 September 2016 (UTC)
I never meant that the reporting of port 80 was the bug; the non-functional URL is the bug. If it reports a port in the URL, that URL should be expected to work.—cyberpowerChat:Online 13:13, 12 September 2016 (UTC)
Sorry, I should have explained that better. -- GreenC 13:29, 12 September 2016 (UTC)
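For what it's worth, the :80 removal itself is a one-line fix. A minimal sketch, assuming the port only ever appears on the original URL's host inside a Wayback link:

<?php
// Strip the default HTTP port from the original URL inside a Wayback link.
$url = 'https://web.archive.org/web/20110101000000/http://example.com:80/page';
echo preg_replace('#^(https?://web\.archive\.org/web/\d+/https?://[^/:]+):80#i', '$1', $url);
// => https://web.archive.org/web/20110101000000/http://example.com/page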

It looks like the problem is more complicated. If the port is 80 and the source URL protocol is https, then it doesn't work, because https uses port 443. Example from John Major:

If the API is returning an https URL with port 80, that's a problem. However, I can't replicate it: the API doesn't convert the source http URL to https. -- GreenC 19:39, 12 September 2016 (UTC)

I don't know what to say. I am certain I never programmed IABot to convert source URLs to https.—cyberpowerChat:Limited Access 20:00, 12 September 2016 (UTC)
If it were caused by the API, then WaybackMedic would be seeing it also: every http link it processed would be modified to https. -- GreenC 20:44, 12 September 2016 (UTC)

Could it be caused by a greedy regex: when changing to https for the archive.org portion, does it bleed into the rest of the URL? -- GreenC 18:00, 13 September 2016 (UTC)

It is using a regex, '/https?/i' to be exact, when querying the Wayback API, but it's limited to 1 replacement. So it would only replace the first occurrence, which is the archive URL itself and not the original URL.—cyberpowerChat:Limited Access 18:03, 13 September 2016 (UTC)
Ok, so somehow I inadvertently removed the restriction. I have patched this and will be deploying the fix in a moment.—cyberpowerChat:Limited Access 18:20, 13 September 2016 (UTC)
Whew, glad you found something. WM2 can "fix" existing cases by removing :80, which I'll send in via SQL. If you think it should change the https to http, that would take more work to program, but it's probably less than a thousand links. -- GreenC 18:51, 13 September 2016 (UTC)
Also just to make sure, if the archive is already https would it carry into the source http? -- GreenC 18:54, 13 September 2016 (UTC)
No, the regex would still match the first. The regex is '/https?/i', so it would always apply.—cyberpowerChat:Limited Access 18:56, 13 September 2016 (UTC)
I think that would be a problem. Try this (click on the preg_replace tab). It ends up converting the source URL protocol when the primary protocol is already https. But if you add "^" to the pattern ([http://www.phpliveregex.com/p/h6d link]) it works better. -- GreenC 19:25, 13 September 2016 (UTC)
preg_replace has an additional parameter, which limits the number of replacements it can perform. In this case I set it to 1.—cyberpowerChat:Limited Access 19:33, 13 September 2016 (UTC)
In the above example there is only 1 available replacement for the string https://archive.org.../http://source.com.... It will convert http://source.com to https://source.com because https://archive.org is already set to https. -- GreenC 20:23, 13 September 2016 (UTC)
Not quite. You misread the regex.—cyberpowerChat:Limited Access 20:26, 13 September 2016 (UTC)
Still don't see it but ok. -- GreenC 22:36, 13 September 2016 (UTC)
It matches both HTTP and HTTPS case insensitive.—cyberpowerChat:Offline 23:27, 13 September 2016 (UTC)
I know, but the option to limit the number of replacements wouldn't count the first https, since it's not a replacement (https -> https). But maybe it is counted. -- GreenC 23:52, 13 September 2016 (UTC)
You need to brush up on regex and computer logic, my friend. ;-) /https?/i will match http or https, case insensitive, and replace it with https. So https is replaced with https, meaning it's still considered a replacement. :p—cyberpowerChat:Offline 23:56, 13 September 2016 (UTC)
Regex is endless; I've never used the PHP variety or a limit before. I would have just used "^" (which seems safer in case the protocol is missing, as with protocol-relative URLs). -- GreenC 00:05, 14 September 2016 (UTC)
Indeed it is. I can't tell you what percentage of the time I spent debugging IABot stemmed from regex problems.—cyberpowerChat:Offline 00:08, 14 September 2016 (UTC)
I doubt the API will ever return a protocol relative URL.—cyberpowerChat:Offline 00:09, 14 September 2016 (UTC)
Well, that's true; API data will have a protocol. My background is in Awk regex, which is its own world but relatively limited compared to PCRE. One day I'd like to learn parsing expression grammars. -- GreenC 00:14, 14 September 2016 (UTC)
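To make the exchange above concrete, here is what preg_replace's limit argument does to a Wayback-style URL (a sketch, not IABot's actual code; source.com is an example value):

<?php
// With the fourth argument (limit) set to 1, only the first protocol is
// touched, and matching "https" still counts as a replacement, so the
// original URL's "http" is left alone.
$archive = 'https://web.archive.org/web/20110519131225/http://source.com/page';

echo preg_replace('/https?/i', 'https', $archive, 1) . "\n";
// => https://web.archive.org/web/20110519131225/http://source.com/page (unchanged)

echo preg_replace('/https?/i', 'https', $archive) . "\n";
// => https://web.archive.org/web/20110519131225/https://source.com/page (the bug)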

InternetArchiveBot possibly setting deadurl incorrectly

Hi Cyberpower678. In this edit, InternetArchiveBot added "deadurl=no" to three references, but in each case, the original URL produces a "couldn't find this page on the site" type message. Is that correct/expected behaviour? Regards. DH85868993 (talk) 06:55, 14 September 2016 (UTC)

It takes time before the bot sees the URL as dead.—cyberpowerChat:Online 12:42, 14 September 2016 (UTC)

memoire-du-cyclisme.net feedback

It doesn't look like it's noticed that memoire-du-cyclisme.net is dead at 1938 Tour de France [12] and 1930 Tour de France [13]. BaldBoris 19:42, 13 September 2016 (UTC)

That URL isn't part of that domain.—cyberpowerChat:Limited Access 19:44, 13 September 2016 (UTC)
I'm not referring to what was picked up, but what wasn't. Last week you said you marked the memoire-du-cyclisme.net domain as down. Both of these IABot edits from today missed the memoire-du-cyclisme.net URLs. These are the only two articles on my watchlist containing memoire-du-cyclisme.net that IABot has gone through. Forgive me if you haven't included it in the latest stable version yet. BaldBoris 19:53, 13 September 2016 (UTC)
It looks like someone tagged it as a paywall somewhere, and the bot picked up on that. When tagged as a paywall, it ignores the status of the entire domain, and takes no action on that URL until it's actually tagged as dead. I've reset it again, in case it was a fluke.
Still isn't catching them. [14] [15] [16] The site's new domain (.eu) put up a paywall for the Tour de France stuff, but the old domain (.net) still has everything from before 2012 accessible via IA. Nothing wrong with that, is there? BaldBoris 01:48, 14 September 2016 (UTC)
Example of their 1959 Tour page before [17], and now [18]. BaldBoris 01:53, 14 September 2016 (UTC)
For 1973 Tour de France it picked up a memoire-du-cyclisme.net URL but not the main Tour page (ref 2). It's on most Tour de France articles and is used throughout them, so I really need to sort this out or I'll just do it manually. Thanks for the help. BaldBoris 17:29, 14 September 2016 (UTC)

InternetArchiveBot setting deadurl=yes when it isn't

Hello. It says on the FAQ that such instances should be notified to you to reset the status, so here I am. With this edit earlier today, the bot added an archive link for two pages from the rsssf.com website and set the deadurl param to yes, but they're both live now. cheers, Struway2 (talk) 13:48, 14 September 2016 (UTC)

I've reset the status, but the site appears to be unstable. The last dead check was this morning, and it came back as dead.—cyberpowerChat:Online 13:59, 14 September 2016 (UTC)
Another one a few minutes ago: here. The original URL works for me. cheers, Struway2 (talk) 15:05, 14 September 2016 (UTC)
Sorry to butt in, but that site doesn't look reliable in the slightest. BaldBoris 17:33, 14 September 2016 (UTC)
The fa-cupfinals.co.uk one? It probably isn't; not something I'd use myself, but something on my watchlist does. cheers, Struway2 (talk) 17:37, 14 September 2016 (UTC)

Incorrect report

I received a warning about this edit, where I removed an AFD notice from a page. I removed the notice because draft space articles belong in MFD. So the removal was valid. Cheers and thanks for all the good work this bot does.--Adam in MO Talk 17:06, 17 September 2016 (UTC)

Cyberbot should clear X13..X20

Hi Cyberpower678, I'm following up again about {{X13}} .. {{X20}} and their talk pages. It seems that the bot is still unaware of these relatively new template sandboxes, and I'm thinking it makes sense for the bot to reset them at some point. — Andy W. (talk ·ctb) 00:16, 18 September 2016 (UTC)

Bot CONFUSES being blocked with dead URL

Note that this bot is blocked from accessing webpages from the Minor Planet Center, e.g. this page; see revert. I have already posted this fact before on this very page, here. For the last few weeks, I have spent hours reverting this bot's unhelpful edits and flagging the corresponding posts on the talk page with FAILED. This has to stop. Rfassbind – talk 20:46, 15 September 2016 (UTC)

Also this URL

is being forwarded to:

it is detrimental to label it as a dead link and add an unhelpful, completely outdated archive link. The HTTP status code for the URI erroneously considered dead is a 302 Found. Rfassbind – talk 00:30, 18 September 2016 (UTC)

Cyberbot I

Hello friend! It looks like Cyberbot I has taken a break from clerking WP:RFPP? Perhaps you could kick it back into order? Best MusikAnimal talk 04:15, 23 September 2016 (UTC)

 Fixed, it was a markup error in one of the pagelink templates. tutterMouse (talk) 13:01, 23 September 2016 (UTC)

I undid this edit by User:InternetArchiveBot since both of the links it marked as dead (i.e. http://www.physics.sfasu.edu/astro/asteroids/sizemagnitude.html and http://www.gps.caltech.edu/~mbrown/dps.html) appear to be live links, not dead links, as they both function fine for me. Just in case this is indicative of some kind of bug in InternetArchiveBot and not some sort of transitory internet connection issue for the bot, I'm leaving this bug report. —RP88 (talk) 01:33, 25 September 2016 (UTC)


Another dead website

I assume your bot doesn't go through all websites, but only those which are in your database? If so (and this post is useful), you can add *.london2012.com to the list. If this post of mine wasn't useful (and the bot really does archive everything it can), say so :) --Edgars2007 (talk/contribs) 19:10, 24 September 2016 (UTC)

It checks every link it encounters on Wikipedia.—cyberpowerChat:Offline 22:18, 24 September 2016 (UTC)
So that you don't think I'm an idiot: I had some small doubts because of some topics regarding dead links on the WP:BOTREQ page. --Edgars2007 (talk/contribs) 09:02, 25 September 2016 (UTC)

Megatokyo bot edits

In the edit your bot made (diff link), three Alexa.com links got tagged as dead but were working when I checked. I should also let you know (if you don't already) that the Wayback Machine has had intermittent failures lately. Your bot should avoid substituting "https" for "http" within URLs or adding the port number ":80" (the Wayback Machine has done this), as another bot, GreenC bot (talk · contribs), cleans up such links, as it did with the follow-up edit. Regards. – Allen4names (contributions) 11:42, 25 September 2016 (UTC)

GreenC bot here. There were about 480 links that had http converted to https (port 443) even though they had a :80 in the URL. Those links are now fixed (Hong Kong), and the bug in IABot was also fixed. (Cyberpower, if you want an SQL for those 480, let me know.) I agree removing :80 reduces complexity in the system. However, in the future I may stop removing them, as it creates "noise" (lots of small edits), which has its own problems. Maybe we could have an RfC to gauge community consensus, if any. -- GreenC 13:38, 25 September 2016 (UTC)

IABot Question

When IABot isn't provided the accessdate, what happens? Also does the IABot add archive urls to urls that aren't necessarily dead? --Tim1357 talk|poke 05:57, 25 September 2016 (UTC)

The reason I ask is because I've built a database of *inferred* access dates for urls based on the first time they are added to an article in the article's history. I was wondering if you were interested in incorporating that information into IABot. Tim1357 talk|poke 19:49, 25 September 2016 (UTC)
When IABot isn't provided with an access date, it does a binary sweep of the page history to determine the initial time and date the link was added, since it can reasonably be assumed that, in most cases, the source was accessed the same day it was added to the article. It can also add archive URLs to non-dead links, but it's currently set not to.—cyberpowerChat:Offline 23:18, 25 September 2016 (UTC)
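A sketch of that binary sweep, under the stated assumption that the URL, once added, stays in the article (the function and variable names are hypothetical):

<?php
// $revisions holds revision wikitext ordered oldest to newest; the URL is
// assumed present in the newest revision and never removed once added.
function firstRevisionWithUrl(array $revisions, string $url): int {
    $lo = 0;
    $hi = count($revisions) - 1;
    while ($lo < $hi) {
        $mid = intdiv($lo + $hi, 2);
        if (strpos($revisions[$mid], $url) !== false) {
            $hi = $mid;     // URL already present here; look earlier
        } else {
            $lo = $mid + 1; // URL not added yet; look later
        }
    }
    return $lo; // index of the earliest revision containing the URL
}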

InternetArchiveBot

I was unhappy with this edit by the InternetArchiveBot.

  1. There was nothing wrong with the original
  2. The bot messed up the page formatting by removing the space before the {{
  3. The accessdate is in an invalid format; bots must comply with the {{use dmy dates}} template
  4. "bot: unknown" is an undocumented value for the deadurl parameter
  5. "df" is an undocumented value for the cite web template

I have reverted the edit, and tagged the article with a {{bots|deny=InternetArchiveBot}} template. Hawkeye7 (talk) 22:18, 26 September 2016 (UTC)

Tagging the page with the bots template was excessively premature. You could have simply appended {{cbignore}} to that source.—cyberpowerChat:Offline 22:32, 26 September 2016 (UTC)
  1. The bot converts all stray archive templates to citation templates. An update in the works, v1.3, will make this conversion exclusive to references only.
  2. That's clearly a bug and needs to be fixed, not suppressed.
  3. IABot does comply, and sets the format through the df parameter, which renders the date in the appropriate format
  4. That value was specifically designed for IABot.
  5. It may not be documented, but it certainly exists and works.
Cheers.—cyberpowerChat:Offline 22:50, 26 September 2016 (UTC)
  1. I'm not sure why you want to do that
  2. okay
  3. As you can see, this does not render the date correctly
  4. What does it do?
  5. As you can see, this does not render the date correctly
  6. The {{cbignore}} documentation was unclear as to whether it would work in this case.

Hawkeye7 (talk) 23:03, 26 September 2016 (UTC)

  1. It helps with the transparency of sourcing. Converting to a cite template allows for the presence of an access date, which the bot can reasonably extrapolate and put into the citation template. Also, for the WebCite templates, which are mostly populated with short-form snapshot URLs, it allows the original URL to be present and visible, should the archive ever go down.
  2. (blank)
  3. I was just recently informed that I need to use the dmy-all value and not just dmy to get all dates to render correctly, so this is a bug that needs fixing.
  4. It mimics the deadurl=unfit, with some minor differences. @Trappist the monk: can enlighten you on that. This value suppresses the rendering and linking of the original URL. deadurl=yes links to the archive, but also links to the original.
  5. Same as 3.
  6. {{cbignore}} works on anything the bot touches. Its parsing engine is quite intelligent. It's designed to mimic human parsing as closely as possible.
Cheers.—cyberpowerChat:Offline 23:16, 26 September 2016 (UTC)
@Hawkeye7:
  1. Because |dead-url=bot: unknown is not intended for use by regular editors, it is only vaguely documented. It functions the same way as |dead-url=unfit, used previously by this bot, except that it is semantically different and has its own maintenance category: Category:CS1 maint: BOT: original-url status unknown. |dead-url=unfit (or |dead-url=usurped) is intended to be used by editors who find an original url that may once have supported an article's text but has now been taken over for the purposes of advertising, porn, malicious scripts, or whatever. When these kinds of urls are found, |dead-url=unfit causes Module:Citation/CS1 to hide that url while at the same time retaining it for historical purposes. This bot was using |dead-url=unfit, which has contaminated Category:CS1 maint: Unfit url, so now we don't know which of the 9k-ish original urls in that category are unfit and which the bot knows to be live but doesn't know whether they support their respective articles.
  2. Yeah, cs1|2 documentation generally sucks. But in this case you are wrong: |df= is documented for all cs1|2 templates that use {{csdoc}}. See Template:Cite web#Date.
Trappist the monk (talk) 00:08, 27 September 2016 (UTC)

There's a problem with this edit by the bot: For the access date, it inserted the date the archive was captured, not the (presumably) current date when the bot accessed the page. This date precedes the founding of Wikipedia, which correctly caused it to be flagged as an error. --Floatjon (talk) 15:58, 29 September 2016 (UTC)

How weird. It might be using the archive snapshot to extrapolate the access time. It shouldn't be doing that, though.—cyberpowerChat:Online 11:50, 30 September 2016 (UTC)

Cyberbot II

I can't believe no one has already asked you this, but fer chrissakes, adjust the bot so it stops spamming talk pages. The edit summary in the pages' histories is more than enough notice that the external links have been modified; such notification is in fact precisely what edit summaries are there for. The talk page is for actual talk about the improvement of the article, which "double-checking bot edits" doesn't fall under. The only pages where anyone ever follows the bot's instructions, verifies the link, and makes a mark are well-curated pages where they would have checked the new links anyway.

At minimum, if there is some policy that requires such notifications, make them as minimal as possible, as a single text line without images: I have just added archive links to one external link on X. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. Right now, it's an eyesore. Nothing else being written is important, creating a category with 180,000+ entries that neither you nor anyone else is whittling down is not helping anyone, and it is a needless piece of makework to require other people to delete them by hand or to craft other bots to follow around cleaning up after you. — LlywelynII 02:42, 27 September 2016 (UTC)

Get a consensus if you want it changed or shut down. This bot has been doing this for over a year now, and is approved to do so.—cyberpowerChat:Offline 02:44, 27 September 2016 (UTC)
Hello C. As you know, I worked on these for several months. Eventually I got burnt out on them, though I still do a few from time to time. They are worth checking, because some of the updates do not work. I have wondered if some kind of edit-a-thon might be organized to reduce the number in the category. I have no idea how those work, but I thought I would mention it to see what you think. Cheers. MarnetteD|Talk 16:08, 29 September 2016 (UTC)
I also have been checking the links and consider the talk page message a useful prompt for doing so. I've gone to the trouble of listing failed links under a separate {{sourcecheck}} with |checked=failed, thinking that it might be useful feedback. I think myself a little unusual in going to such trouble. On even my most active articles, I have little competition to complete this task. I neglect to change archived messages on pages I'm new to, even if I check the link, since changing archived material is generally discouraged. The failed links that go undetected should be either relisted or reworked with time, especially since the bot seems to work on automatic pilot in checking for failed links. Dhtwiki (talk) 23:38, 29 September 2016 (UTC)
I'm working on the new interface as we speak, so this sourcecheck template can be removed. The first feature that I'm working on is the ability to reset links falsely tagged as dead; that way, editors can easily do it themselves instead of having to come to me.—cyberpowerChat:Online 12:16, 30 September 2016 (UTC)

I get 404 errors

on the 2 Wayback sources you added here a couple of hours ago (I think). Could you please take a look? Thanks, --Hordaland (talk) 04:32, 1 October 2016 (UTC)

Both work for me.—cyberpowerChat:Online 12:20, 1 October 2016 (UTC)

official website template

Thinking about {{official website}} (aka {{official}}); see this edit for example. This is a complex template, as it gets link data from Wikidata and compares it with the source link; if they don't match, it considers that an error and adds the page to a tracking category. In this example that's what happened. I can see three possible solutions:

  1. Create new arguments for the official template, archiveurl and archivedate. I started a discussion at Template_talk:Official_website but got no response from anyone.
  2. Leave official in place, and add {{wayback}} to the end eg:
    *{{Official|example.com}} {{webarchive |url=https://web.archive.org/web/*/example.com |date=* }}
  3. Replace official with wayback eg.
    *{{webarchive |url=https://web.archive.org/web/*/example.com |date=* }}
    In this case it would also have to transfer any |name= argument to wayback's |title=.

Each method has pros and cons. Official is a widely used template with 162,192 transclusions, more than double the number of wayback templates. Probably the least disruptive would be #2, because it doesn't touch the original official, though it presents problems if the archive isn't templated like wayback or webcite. #3 would give the best performance, since it reduces the number of templates. #1 would be smooth, but it increases the complexity of that template's code, and it's unclear there is consensus for it. And it will be up to you which is the better solution for the bot. -- GreenC 13:32, 30 September 2016 (UTC)

Per the above section, I don't know what would be the best yet.—cyberpowerChat:Online 12:22, 1 October 2016 (UTC)

Nested templates

This edit nested a dead-link template inside another template, creating an error on render. -- GreenC 13:08, 30 September 2016 (UTC)

The problem is, IABot can't recognize the {{official}} template, because it is neither an archive template nor a cite template, nor does it share any cite template functionality. So it just sees the link inside it.—cyberpowerChat:Online 12:21, 1 October 2016 (UTC)
Yeah, I can definitely sympathize with the complication it causes; it applies to thousands of templates. One idea is a function that finds the character start/end position of every template: if the URL's position is within one of those ranges, it's inside a template. The function would need to deal with nested templates itself somehow... maybe a list of all start positions, all end positions, and an algorithm to determine when a nest occurs and adjust the ranges accordingly. Don't ask me how :) -- GreenC 14:51, 1 October 2016 (UTC)
The parser itself is very complex, and requires a delicate touch when enhancing it. I think the simplest solution is checking whether there are curly brackets surrounding the URL and then grabbing the template.—cyberpowerChat:Online 16:18, 1 October 2016 (UTC)
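A sketch of the stack-based scan GreenC suggests above, recording the start/end offsets of every {{...}} span, nesting included (not IABot's actual parser; the function name is hypothetical):

<?php
// Return [start, end] offset pairs for every {{...}} span in the wikitext,
// so a URL's offset can be tested for template membership.
function templateSpans(string $wikitext): array {
    $stack = [];
    $spans = [];
    $len = strlen($wikitext);
    for ($i = 0; $i < $len - 1; $i++) {
        if ($wikitext[$i] === '{' && $wikitext[$i + 1] === '{') {
            $stack[] = $i; // remember where this template opened
            $i++;
        } elseif ($wikitext[$i] === '}' && $wikitext[$i + 1] === '}' && $stack) {
            $spans[] = [array_pop($stack), $i + 1]; // close the innermost template
            $i++;
        }
    }
    return $spans;
}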

Bot didn't notice dead URL?

Hi! In this edit, InternetArchiveBot added "deadurl=no" and an empty "df" parameter. First of all, the original URL was dead when I tested it ("server not found"), so the parameter should have been "deadurl=yes". Second, why the empty date-format parameter? Is that supposed to be filled in later? Thanks. — Gorthian (talk) 00:52, 2 October 2016 (UTC)

df is left there in case the bot used the wrong formatting; by setting that parameter, IABot and other bots can easily avoid date-format problems in the future. If IABot knows what the format is supposed to be, it will fill it in itself. As for the dead state, IABot has yet to recognize it as dead. The site needs to fail 3 separate checks, each spaced out at least 3 days, for it to be considered dead. According to the DB, the entry for that URL has recorded one failed check, made today. It needs 2 more to be considered dead. This is to prevent catching intermittent site failures that are only down temporarily.—cyberpowerChat:Offline 00:58, 2 October 2016 (UTC)
Wow, how thorough! Thanks for your answers. — Gorthian (talk) 05:30, 2 October 2016 (UTC)
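A sketch of that dead-check rule (the field names are hypothetical; IABot's actual schema may differ):

<?php
// A URL is only considered dead after 3 failed checks spaced at least
// 3 days apart, so short outages don't get a link marked dead.
function recordCheck(array &$entry, bool $reachable, int $now): bool {
    if ($reachable) {
        $entry['fail_count'] = 0;                       // success resets the count
    } elseif ($now - $entry['last_fail'] >= 3 * 86400) {
        $entry['fail_count']++;                         // only count spaced-out failures
        $entry['last_fail'] = $now;
    }
    return $entry['fail_count'] >= 3;                   // dead after the third failure
}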

I'm far too easily amused

I just noticed on WP:RFPP that if a request is not signed, Cyberbot I will make an edit saying it can't be parsed. Scroll down and there is an error message caused by the lack of a signature in the original request. This causes the bot to return and tell itself that the previous bot message can't be parsed. Don't know if you have some way to suppress the error message in the first bot comment or stop the second. Cheers. CambridgeBayWeather, Uqaqtuq (talk), Sunasuttuq 09:25, 2 October 2016 (UTC)