Wikipedia:Bots/Requests for approval/Coreva-Bot 2
- The following discussion is an archived debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA. The result of the discussion was Request Expired.
This is a reopened bot request - please see the bottom of this page for the most recent information.
Operator: Excirial (Contact me,Contribs) 18:42, 7 January 2009 (UTC)[reply]
Automatic or Manually Assisted: Fully automatic, with the possibility to manually override the bot's behavior if desired.
Programming Language(s): VB.NET
Function Summary:
- Query the Wikipedia API every X minutes (currently: 30 minutes) for new pages
- If the bot is cold-started, fetch the new-page list with the last X (idea: 500-1000) pages. (See: Note 1)
- If the bot is already running, only fetch the list of new pages since the last visit.
- If the bot has found any new pages, load the page content and start parsing it.
- The bot will parse the content to determine whether any maintenance tags have to be placed.
- If a maintenance tag is needed, add the tag to the article, and resume with the next article.
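For illustration only - the bot itself is written in VB.NET - the polling cycle in the list above could be sketched roughly like this in Python, with the fetch/parse/save steps passed in as stand-in functions (all names here are hypothetical, not Coreva's actual code):

```python
COLD_START_LIMIT = 500  # pages to fetch when the database backend is empty

def run_cycle(last_seen, fetch_new_pages, load_page, parse_for_tags, save_with_tags):
    """One poll cycle of the loop described above: fetch, parse, tag.
    `last_seen` is None on a cold start; otherwise it is the newest
    creation timestamp already processed."""
    if last_seen is None:
        pages = fetch_new_pages(limit=COLD_START_LIMIT)   # cold start: last N pages
    else:
        pages = fetch_new_pages(since=last_seen)          # warm: only new pages
    for page in pages:
        tags = parse_for_tags(load_page(page))            # which maintenance tags are needed?
        if tags:
            save_with_tags(page, tags)                    # at most one edit per page
        if last_seen is None or page["created"] > last_seen:
            last_seen = page["created"]
    return last_seen
```

A caller would invoke `run_cycle` once every X minutes with the value it returned last time.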
Edit period(s) (e.g. Continuous, daily, one time run): Continuous
Edit rate requested: 1 edit per new page tops. (Estimated 10 edits a minute tops, currently a test setting that is open to be lowered.)
Already has a bot flag (Y/N): (Not applicable, new bot)
Function Details: Coreva's main task is placing maintenance tags on new pages that require them, similar to the way most new-page patrollers work their beat. Coreva will regularly (every 5-10 min) check the new-page list for new articles, fetch each new article's content, parse the content (see: Parser Table) and finally update the article, adding the required maintenance tags. Just like the previous Coreva, this one should also be quite light on server resources. The bot queries the server's new-page list every 5-10 minutes, and (so far) each article requires two server queries (one to get the article's content, and one to check whether the article is an orphan). Category counts, link counts et cetera are handled internally by the bot. Additionally, the bot will require one database write to add the templates (in case this is required). The estimated edit rate for the bot will be 2 edits per minute on average. (See: Note 2)
Coreva is not a miracle, and will never replace a living new-page patroller. Coreva cannot patrol for WP:CSD and does not understand hoaxes, advertising or vandalism. However, a lot of articles slip off the new-page list without any form of maintenance tags. About half the pages on the new-page list show as not being patrolled, and even though this is a very rough guess, this equals more than 2,000 pages a day. (See: Note 3) Since adding maintenance tags is thoroughly boring work, I think Coreva could spare quite a few patrollers a bit of boredom :). (Unlike CSD tags, which require at least some form of using your brain, maintenance tags require nothing more than checking 20 indicators, most of them nothing more than: present/not present.)
Finally, just like the old Coreva, this is still very much a work in progress, done only in spare time. While progress on this Coreva is much faster than on the previous one, I assume it will still take a few months before it is capable of being a fully automated bot.
Even if it were technically capable of doing so, it will not be a fully automatic bot until I have tested it thoroughly (a few weeks, I guess) in assist mode, which means Coreva would only give me feedback on what tag it would place on every page it checks. This way any annoying mistakes in the parser should be ironed out, while at the same time it allows the parser code to be improved.
Parser Table
This table gives an overview of the templates Coreva will be placing on articles, along with the current criteria configuration for doing so. Note that this is still pretty much in beta stage; templates may be added and removed depending on tests. Also, the criteria are still based on very simple algorithms. Coreva's tests are conducted on a very small and varied set of locally stored articles, so the criteria are still general. In their current form they should, however, produce very few false positives (but would likely have quite a few false negatives). So all in all: work in progress! (See: Note 4)
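As an aside, the two per-article server queries described above (fetching article content, and checking whether the article is an orphan) map onto standard MediaWiki API calls. A hedged Python sketch - Coreva's real client is VB.NET, and these helper names are illustrative only:

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"  # standard English Wikipedia API endpoint

def content_query(title):
    # Query 1: fetch the current wikitext of the article.
    return API + "?" + urlencode({
        "action": "query", "prop": "revisions", "rvprop": "content",
        "titles": title, "format": "json",
    })

def orphan_query(title):
    # Query 2: list pages linking to the article; zero main-space
    # backlinks would indicate an orphan.
    return API + "?" + urlencode({
        "action": "query", "list": "backlinks", "bltitle": title,
        "blnamespace": 0, "bllimit": 1, "format": "json",
    })
```

Everything else (category counts, link counts) can be computed from the fetched wikitext without further server round-trips, which is what keeps the bot light on server resources.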
Notes
Discussion
Reopening request
Over the past two-ish months the amount of time I could spend on Wikipedia was drastically reduced due to other duties, causing a certain lapse in Coreva's development. Another issue halting development progress was caused by an old programmer's trap: building a patched-together prototype which should have been thrown away once I had proof of concept that it actually worked, and instead keeping the prototype and resuming work on it, which eventually led to a horrible code mess and a completely incomprehensible program. In the past month I finally found the time and willpower to use a step-through debugger on the entire program to decipher and salvage the mess as much as possible, before rewriting Coreva from scratch, save for a few salvaged functions that actually worked. The actual workings of the bot have changed very little from the table I added above - I dropped the STUB, TOMANYCATS and TOMANYLINKS checks due to them being prone to false positives. I am currently testing a module that can detect peacock pages (based upon statistical analysis, weighted word lists and some basic calculations); so far it works fine when comparing featured articles versus peacock articles (1 false positive on 270 correct tags), but the calculation algorithm makes too many mistakes on small articles, so it is disabled for now.
Et Cetera
Excirial (Contact me,Contribs) 21:05, 11 June 2009 (UTC)[reply]
Approved for trial (10 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. This is a very long RfBA, and the specs have changed throughout and are difficult to follow. I think the best way for all parties to understand what this bot would do is to give it a very small trial. – Quadell (talk) 13:12, 18 June 2009 (UTC)[reply]
Approved for trial (20 edits). Please provide a link to the relevant contributions and/or diffs when the trial is complete. Okay, let's have another go. – Quadell (talk) 22:38, 22 June 2009 (UTC)[reply]
Quick status summary
Since this RFBA is quite old, it contains a lot of information which is no longer completely up to date. Besides, it has become so long that it is somewhat unreadable, so here is a summary for quick reference.
General
What will Coreva-Bot's task be?: Coreva-Bot will function as a new-page patroller, checking articles for problems. Once it has found an issue it will add the appropriate maintenance templates to the article.
How will Coreva operate?: If Coreva is started for the first time - that is, its database backend is empty - it will query the server for the last 500 new pages and save that list to the backend; if Coreva already has data in its backend, it will query the server for all pages created since it last ran (5,000 limit; 500 for now, as it is not yet flagged as a bot). Coreva will then load and check pages, filling its save buffer. The speed at which pages are checked depends on the number of pages in the buffer - more pages means longer intervals. Every 6 seconds the buffer is checked for pages to save - if there are any, the oldest page is saved with its templates added.
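The six-second save-buffer behavior described above can be sketched as a small queue. This is an illustrative Python outline only (the bot is written in VB.NET, and `SaveBuffer` is a hypothetical name); a scheduler would call `tick` every 6 seconds:

```python
from collections import deque

SAVE_TICK = 6  # seconds between buffer checks, per the summary above

class SaveBuffer:
    """Checked pages are queued with the templates they need; every tick,
    the oldest buffered page (if any) is written out."""
    def __init__(self):
        self._queue = deque()

    def add(self, title, templates):
        # Called after a page has been parsed and needs tags.
        self._queue.append((title, templates))

    def tick(self, save):
        # Called every SAVE_TICK seconds: save the oldest buffered page.
        if self._queue:
            title, templates = self._queue.popleft()
            save(title, templates)
            return title
        return None
```

Saving at most one page per tick naturally caps the edit rate at 10 edits per minute, matching the limit stated below.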
Tagging Articles
What will Coreva-Bot template for?: {{Uncategorised}}, {{Unreferenced}}, {{Footnotes}}, {{Wikify}}, {{Orphan}}, {{Sections}}, {{internallinks}}. Statistical analysis shows that the {{peacock}} template is prone to errors, which is why it is disabled indefinitely.
What restrictions apply for tagging?: Coreva will not template any pages marked as CSD - but it will template PROD and AFD pages. Coreva will not tag removed pages. Coreva will not tag pages marked as disambiguations (this includes the basic disambig template, all aliases and specialized disambiguation templates such as {{tl:hndis}}). It will not tag a page twice with the same template if maintenance templates already exist.
What are the criteria for each template to be added?: (Note: these criteria are constantly improved - though do note that they only grow stricter.) Templates will not be added if one is already present.
- Uncategorized: The article has 0 categories - note that any category, including maintenance categories, counts towards this limit.
- Unreferenced: The article contains no reference header, or an empty one, and no <ref> tags.
- Footnotes: The article contains a reference header with any non-whitespace content, and no <ref> tags. Also, the templates {{1911}} and {{JewishEncyclopedia}} must not be present.
- Wikify: The number of internal links is 0.
- Orphan: The article has no other articles linking to it.
- Sections: Determined by an exponential mathematical formula.
- Internallinks: The article has fewer internal links than one for every 1,000 characters. Note that, while a rather unsophisticated filter, this works pretty well.
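Several of these criteria are simple enough to express as one-line checks. The following Python fragment is a rough illustration of the uncategorized, wikify and internallinks rules as stated above - not Coreva's actual parser, and the regexes are simplified assumptions:

```python
import re

def count_internal_links(wikitext):
    # [[...]] links that are not category/file/image links (simplified).
    return len(re.findall(r"\[\[(?!\s*(?:Category|File|Image)\s*:)",
                          wikitext, re.IGNORECASE))

def needs_uncategorised(wikitext):
    # Zero [[Category:...]] links of any kind (maintenance categories count too).
    return not re.search(r"\[\[\s*Category\s*:", wikitext, re.IGNORECASE)

def needs_wikify(wikitext):
    # No internal links at all.
    return count_internal_links(wikitext) == 0

def needs_internallinks(wikitext):
    # Fewer than one internal link per 1,000 characters.
    return count_internal_links(wikitext) < len(wikitext) / 1000
```

The orphan check, by contrast, cannot be computed from the wikitext alone and needs the backlinks query mentioned earlier.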
Technical and operational limits
- Articles younger than an hour are not checked - instead the bot goes into sleep mode until it is allowed to tag again.
- Coreva tracks pages tagged - unless manually reset, it will not tag the same page twice.
- The edit rate will never exceed 10 edits per minute; mostly the bot will make around 7 or 8 edits, depending on the number of pages in its buffer.
- The bot will query the server once on startup, and then again once every 30 minutes for new pages. Each page checked requires two queries: one for the article content, and one to check whether the article is an orphan. If an article needs to be updated, the bot will save the page once per article.
Todo
Coreva is quite near being "finished", at least the integral part of it. Due to the number of templates the bot handles, its filters will likely be constantly tweaked to reflect new templates or guidelines. In the future I might submit another feature request so that, in case Coreva runs out of new pages, it will check through older pages at a snail's pace. Other than this, the only thing that remains is some work on the GUI and on the efficiency of certain sections - none of which should change its behavior substantially.
Re-Opened (Yet again *Sigh*)
Due to some unforeseen circumstances I have been almost completely inactive for the last 3 or so months, causing this bot request to expire yet again. Having finally found some spare time to work on this bot again, I would like to reopen this RFBA.
As for the current status: bug number 4 is now solved; Coreva will only add the footnotes template to pages of substantial length. It now also converts ampersands and other reserved HTML characters correctly before saving the page, and I have updated the regexes used to determine whether a template should be placed, thus reducing the number of false positives. Excirial (Contact me,Contribs) 22:11, 30 October 2009 (UTC)[reply]
- The intention is to tag 6 minute old articles with maintenance templates? Why? Is there community consensus that an editor should have only 5 minutes to write before a bot tags the article? What might be missing, imo, is a few more minutes to write an article.
- Personally, I'd let the bot finish it if my editing was interfered with in this manner. It takes hours to write an article. Sometimes I post a stub first. I'd like to see the community consensus for these tasks, for the templates to be added by a bot, and for the amount of time before adding the templates. It seems hostile if I understand the time frame correctly.
- Also, how many templates will it add? It seems to say it will only add one, but which one of the many? Or will it add more? --69.226.106.109 (talk) 02:41, 31 October 2009 (UTC)[reply]
- I'm not certain where you got the 6-minutes part, as Coreva is hard-coded not to check any articles younger than an hour - if it runs into articles younger than an hour, it will automatically disengage from tagging them until they are the required age. In that time the bot could very slowly iterate through Wikipedia's older articles to see if they have any issues - though for now it just halts itself until it is allowed to tag again.
- You said you'd check for new pages every 5-10 minutes, so I guessed 6 minutes after the new page appeared it could have a tag on it. Is an hour a time that the community considers reasonable?
- (See below)
- The "minimal time" part is of course easily changeable to a longer or shorter duration (it used to be 30 minutes, actually), but in this case I chose an hour so that any new contributor still has a chance to see the tags - and thus receives some input on how to improve the article. Keep in mind that new-page patrollers using FRIENDLY or similar software exhibit the same behavior as the bot - only faster. For example, this article was tagged within 15 minutes and this one was tagged within 40 minutes. Note that these are just two random articles I dug up; I have seen plenty being tagged within 10 minutes. Similarly, quite a few are left completely untagged while they still need quite some work.
- I don't think it works that way, and it's hard to follow the reasoning behind, well, human editors do this and it's worse so than what the bot will be doing...
- Due to the way patrol tools work, articles tend to get tagged sooner rather than later, as articles are mostly processed at near-real-time speed. During development I tended to mimic already-present tools and procedures as much as possible, as those are obviously legal to use within the guidelines. Coreva has the added advantage that it can simply query the API to receive a list of recent changes, so from that perspective it matters little whether the wait time is an hour, a day or a week. As far as I know there is no community consensus regarding tag time with maintenance templates - if there is, please tell me. Changing it takes very little time, since it only means adjusting a single number.
- How about finding out about some reasonable length of time by getting some feedback from the community? An hour seems reasonable to me, unless someone is still working on the article right then. I created an article of average difficulty from one of the lists of missing articles to see how long I usually work on it before I would leave it for a while, Chaetopterus and an hour seems okay, because I usually add more sources to my articles than most editors. But I would feel more comfortable about the timing if it were in lines with voiced community guidelines. I do appreciate that you considered how users usually go about it. --69.225.3.198 (talk) 21:36, 2 November 2009 (UTC)[reply]
- Certainly, it is always good if a bot has some form of community consent, and I will ask tomorrow at the village pump what users think a reasonable time would be. As said before, my timing was mostly based upon giving editors some time, while at the same time allowing new users to receive some feedback. However, seeing as you raised the issue that a bot tagging halfway through can be annoying, I'm more than happy to change that - personally, I always work in user space unless it's a small stub I can just create in minutes.
- Yes, I think asking the community is good for what would be a reasonable time for a bot tagging new articles.--69.225.3.198 (talk) 23:29, 2 November 2009 (UTC)[reply]
- Asked here. Feel free to comment if you are interested in it. :) Excirial (Contact me,Contribs)
- As for the templates, Coreva will add one for every issue it detects. However, when multiple issues are found it will mimic WP:Friendly and add the grouped {{articleissues}} template instead. Lastly, Coreva will only check an article once - after that it will not check it again unless I manually reset the bot. In this respect it is not that different from a new-page patroller, who might tag your article with maintenance templates as well. Neither Coreva nor patrollers are mind readers, which means neither knows whether you intend to continue work on an article later on. In both cases, removing the templates in an edit you were already making is sufficient to keep the tags off. Excirial (Contact me,Contribs) 10:52, 31 October 2009 (UTC)[reply]
- So, if a user writes a single line stub on an organism it could essentially be tagged with so many templates in an hour after it has been written that the reader cannot find the text in the article? IMO this is the equivalent of a speedy deletion, if you make it impossible to read the article by obscuring the text with tags? --69.225.3.198 (talk) 16:15, 2 November 2009 (UTC)[reply]
- The only thing Coreva usually flags a well-written stub article for is the lack of references; Diego de Miguel, for example, came through without any tag at all. Of course, what you mention is possible; on the other hand, Domohani Kelejora High School was tagged with three templates, but this was because the article was plain text without any wiki formatting at all, which meant it really needed work done. Excirial (Contact me,Contribs) 18:16, 2 November 2009 (UTC)[reply]
- So, what's a well-written stub? --69.225.3.198 (talk) 21:36, 2 November 2009 (UTC)[reply]
- I wrote a quick tool this evening based upon Coreva, which allows me to evaluate any article within seconds while giving feedback on what Coreva would have done had it encountered it (and I tell you, it's a blessing, as it is more versatile than Coreva in its analysis, meaning that I can easily test and improve the detection algorithm).
- Now, as for a well-written stub: your own Chaetopterus article would not have received any tag since its first revision. Also, pressing Special:Random while looking for stubs, these were a few results: Aigües - unreferenced. Bērze parish - unreferenced. Paddy Forde - none. McCulley Township, Emmons County, North Dakota - none. Cigaritis - unreferenced. These are of course older articles, so I took 5 successive new articles as well: Belarusian Independence Party - none. Infimenstrous - ignored for CSD. Aventure en Australie (TV episode) - uncategorised, unreferenced. The Reincarnation of Peter Proud (1973 novel) - uncategorised, unreferenced, orphan. Jonas Cutting Edward Kent House - orphan.
- There was one false positive related to the sections template, which I traced back to a typo made while coding the analysis tool, rather than in Coreva. Excirial (Contact me,Contribs) 23:09, 2 November 2009 (UTC)[reply]
- Use {{Article issues}} not {{Articlesissues}}. Rich Farmbrough, 19:21, 9 November 2009 (UTC).[reply]
From looking at these, I think I would like to have broader community consensus for the orphan tagging, and for the tagging in general. The time looks like it should be longer, say 3 hours during some periods, but this may be flexible. I don't know if the question you asked is sufficient for understanding the community's desire to tag in general. I am concerned, as I said, about adding tags to certain types of generally stubby articles. Many stubs about living things are just a single line and a taxobox. While Cigaritis would be a better article if referenced, should be referenced, and its lack of references should be called to someone's attention, adding a no-references banner across the top will overpower the text and essentially, imo, make the article useless to the reader. It might as well be deleted.
Can articles be categorized unreferenced without the huge banner, or can it be put on the bottom of the page? Where are these categories of unreferenced articles, by the way, I would like to add references to many of them. --69.225.3.198 (talk) 09:26, 4 November 2009 (UTC)[reply]
- On the "unreferenced" issue, the bot's stated mechanism doesn't seem nearly sophisticated enough. "The article contains no, or an empty reference header and no ref tags" misses many potential referencing techniques. Generally, I doubt the bot is going to be an effective way to process for this tag; when, for instance, there are raw links in the article, it will be difficult for the bot to differentiate between ones that are useful references and ones that are not. Christopher Parham (talk) 15:06, 4 November 2009 (UTC)[reply]
- (69.225.3.198) It is of course possible to add the category to the article without adding the "visual" template, but I believe community consensus is against doing so, because the requirements for improvement should be visible (if I remember a discussion some time ago correctly). The reasoning for this was that readers should be aware of the issues with the information they are presented. As for the category: it is located under WP:backlog, or more specifically under Category:Articles lacking sources. Currently just 188,583 are tagged, so by tomorrow you could be done with the backlog :P.
- (Christopher Parham) Which is why I'm constantly busy improving Coreva's detection algorithms. The majority of articles either have no references or have references added correctly as stated in WP:MOS. There are indeed other techniques, such as linking websites in the middle of the text (either with an external link or just textually) or dumping them all at the bottom without a section header or ref tags, and I could go on for a while with these.
- Most of these can, however, be reliably detected. A regular expression can easily filter websites out of the article, even if they are not marked as an external link. Since these kinds of pages are fairly rare, I do not have the number of test subjects I normally like, but I was considering marking pages with multiple external links in the text for cleanup. Alternatively, it is possible to ignore articles which seem to have links. This would certainly give false negatives, but it would still tag plenty of articles correctly. Currently a substantial portion of articles ends up being completely untagged in the first place, so it would already improve the situation, even if it does not solve it. Excirial (Contact me,Contribs) 16:34, 4 November 2009 (UTC)[reply]
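As a toy illustration of the kind of regular expression discussed here (in Python rather than the bot's VB.NET, and a deliberately rough heuristic rather than Coreva's actual code): a bare URL in body text can be distinguished from one already wrapped in external-link brackets with a lookbehind:

```python
import re

# A bare http(s) URL, i.e. one NOT preceded by "[" (external-link markup).
BARE_URL = re.compile(r"(?<!\[)https?://\S+")

def has_inline_links(wikitext):
    """True if the text contains a raw URL outside [ ] external-link markup."""
    return bool(BARE_URL.search(wikitext))
```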
- You should have a look at Erik9Bot's BRFA to see some more ways articles can contain references that aren't immediately apparent. Also \([^)]* p+\. is a good string to look for. Rich Farmbrough, 19:21, 9 November 2009 (UTC).[reply]
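For what it's worth, the suggested string can be tried directly; a quick Python illustration (the pattern flags parenthetical page citations such as "(Smith 2001, p. 5)"):

```python
import re

# Rich's suggested pattern: an open parenthesis, any non-")" text,
# then a space, one or more "p"s and a dot - i.e. "p." or "pp." page refs.
PAGE_REF = re.compile(r"\([^)]* p+\.")
```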
- That is indeed quite a handy RFBA. I'm glad to see that Coreva covers most of the points it mentions, but there are a few things that Coreva doesn't do, or at most does differently. It seems that the mentioned bot accepts any form of link starting with http:// as a reference, regardless of where the link leads. Perhaps a valid strategy, as it is quite difficult to get a false positive this way (though false negatives would likely increase). Searching for ISBN is something I certainly have to add, similarly a "List of" / "Lists of" check, but this is something I was already planning to add.
- If anything, I would rather not be forced to create a separate hidden category in which Coreva lists possibly unreferenced articles. If that were the case, I think I would prefer dropping the check for the unreferenced template, as it wouldn't justify the extra work implementing it would create. I will be integrating the suggestions from that RFBA soon, but for now I have become a little sidetracked with the idea that I could use Coreva to track dead references as well. The last few days I mostly spent my time tinkering on a prototype that I could integrate with Coreva. Seeing as Coreva will likely have quite some downtime due to the finite number of articles it has to check, it seems that a second activity could fit neatly into that time. Excirial (Contact me,Contribs) 22:51, 9 November 2009 (UTC)[reply]
- * I would be dead against repeating what Erik9bot did. We have a hidden category with 100,000+ articles in it: I have seen people going through their "bailiwicks" just hoiking it out.
- * In terms of the tag overpowering the article I have offered
- This article does not cite any sources. Please help improve this article by adding citations to reliable sources. Unsourced material may be challenged and removed.
- and this could be made smaller, and used for orphaned too. Uncat is not a problem; that is one backlog that is under control.
- * there is a question in my mind about the usefulness of "orphan" anyway. I shall raise that at VP.
- Rich Farmbrough, 21:05, 18 November 2009 (UTC).[reply]
- I like it much better than the current one. Living thing stubs, though, aren't likely to be removed even with this tag, and, again, for one sentence and a taxobox it's still overpowering. Can it be put at the bottom of the article? I think it's better to have an article flagged in some way, by a banner like this for example, if it has no references, because encyclopedia articles, in general, should not be unreferenced. I'm just never sure who's fixing these unreferenced articles, or if the banners are just permanent parts of the articles. --IP69.226.103.13 (talk) 11:30, 19 November 2009 (UTC)[reply]
Regarding the {{Footnotes}} tag, what would the bot do with new articles that use parenthetical references, and have a references section but do not use the <ref> tag? For instance, take John Vanbrugh and assume the notes section (not related to referencing in this case) didn't exist; how would the bot approach this article? Christopher Parham (talk) 15:12, 7 December 2009 (UTC)[reply]
- A user has requested the attention of the operator. Once the operator has seen this message and replied, please deactivate this tag. (user notified) Seeing as Excirial has been inactive since November, I will probably be closing this as expired in a few days, until such time as he returns. MBisanz talk 06:39, 27 December 2009 (UTC)[reply]
- Request Expired. MBisanz talk 01:28, 3 January 2010 (UTC)[reply]
- The above discussion is preserved as an archive of the debate. Please do not modify it. To request review of this BRFA, please start a new section at WT:BRFA.