Wikipedia:Featured article candidates/Parallel computing/archive1
- The following is an archived discussion of a featured article nomination. Please do not modify it. Subsequent comments should be made on the article's talk page or in Wikipedia talk:Featured article candidates. No further edits should be made to this page.
The article was not promoted 16:28, 18 November 2007.
Parallel computing is a core topic in computing. It's also a subject near and dear to my heart, since that's what I've spent 4 years in grad school learning about. I recently got tired of the really sorry shape the article was in, so I totally rewrote it from scratch (you can find the deleted revisions at user:Raul654/PC).
I've tried to make the article as accessible as possible to a non-expert, but it's a complicated subject area. For highly technical articles like this, the rule of thumb (as previously discussed here on the FAC) is that while obviously we would like it if the whole article were accessible to laymen, it's infeasible to expect the entire article to be that way. The intro should be accessible, however; beyond that it's not strictly necessary. Raul654 05:05, 7 November 2007 (UTC)[reply]
- Raul, I have some capitalization questions, and some MOS adjustments are needed (mostly caps, hyphens, dashes and some ref formatting). May I just go in and make those myself? There are caps I can fix myself, but some aren't clear. For example, I'm not clear what to do about Flynn's Taxonomy; the article title uses a capital T, but the article text does not (Flynn's taxonomy is a classification of computer architectures, ...); which is correct? If it doesn't have a cap T, the article needs to be moved to the correct name. Let me know if I can make the MOS fixes myself; they are mostly trivial, except for confusing cases like Flynn. I can leave inline HTML queries when I'm not sure. SandyGeorgia (Talk) 05:52, 7 November 2007 (UTC)[reply]
- Make any changes you feel are appropriate. If it's factually wrong, I'll let you know. I'm fine with decapitalizing the T. Raul654 05:55, 7 November 2007 (UTC)[reply]
- Could sentences such as this one... Despite these power issues, transistor densities are still doubling every 18 to 24 months per Moore's Law. Now that these transistors are not needed to facilitate frequency scaling, they can be used to add extra hardware to facilitate parallel computing. This is the basis for the current push towards multicore computing paradigm ... be written in a way that is more permanent, that is if read a year from now, this will be still accurate? I refer to the words "still doubling" and "current push", for example. ≈ jossi ≈ (talk) 06:07, 7 November 2007 (UTC)[reply]
- I've rewritten that section. I'm keeping "still doubling", because that's really the best way to say that some people think Moore's law is at an end, but this has not materialized (so far). Moore's law has been with us for 40 years, and I don't think it's going anywhere anytime soon. Raul654 15:02, 7 November 2007 (UTC)[reply]
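- For reference, and purely as arithmetic on the figure quoted above (not a claim taken from the article): a doubling time \(T_d\) of 18 to 24 months means the transistor count grows as
 \[ N(t) = N_0 \cdot 2^{\,t/T_d}, \]
 which over a decade works out to between \(2^{10/2} = 32\times\) (24-month doubling) and \(2^{10/1.5} \approx 101\times\) (18-month doubling).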
- I hit a lot of the MOS concerns. I see someone else was removing some of the wikilinks from section headings, but there are still more. I would remove them myself but I can't find that guideline that I've seen a million times about no wikilinks in section headings because they break something in the software. Maybe someone else knows if something changed in the software or what became of that guideline, but wikilinks in section headings used to be a big no-no. SandyGeorgia (Talk) 07:12, 7 November 2007 (UTC)[reply]
- I do not recall a guideline on this issue, but agree that it is much better to have the wikilink in the body text, rather than in a section title. ≈ jossi ≈ (talk) 17:08, 7 November 2007 (UTC)[reply]
- Support - The article is well written, concise, and at the same time covers all notable aspects of the subject. The only change I would make would be to move the equations down the page a bit, providing first a textual description of the concept and its evolution. Think of the reader... I would also add a section with an alpha-sorted list of all the sources used. ≈ jossi ≈ (talk) 17:11, 7 November 2007 (UTC)[reply]
- I don't want to move the history section (currently at the bottom) because it includes a technical description of some older machines, and an understanding of those technical concepts (explained in the preceding sections) is necessary to fully understand that section. I don't have a problem adding an alphabetized list of sources used. Raul654 17:38, 7 November 2007 (UTC)[reply]
- I tried creating a references section, but I have so many one-time sources that the sections are very redundant and reduce the usability of the article. As such, I reverted myself. Raul654 17:49, 7 November 2007 (UTC)[reply]
Comment I am happy to see that Raul654, our featured article director, with all of his responsibilities, still has time to edit articles. While I have the utmost respect for Raul654, I am concerned that if he decides on the promotion of the article he himself nominated, it could create the appearance of a conflict of interest, and even the appearance of a conflict of interest can reflect poorly on the WP:FAC process and wikipedia itself. Even if Raul654 were to recuse himself in this case, reviewers might feel inhibited and the position of the nominator might influence their comments. Awadewit | talk 04:08, 8 November 2007 (UTC)[reply]
- I'm sure my various comments here on FAC provoke many responses, but I very much doubt that inhibiting is one of them. This is not my first featured article (I've notched something like 10 of them up). Raul654 04:30, 8 November 2007 (UTC)[reply]
- Well, I was alluding to your position as FAC director, not your comments or anything about you personally. It was more of an abstract argument about the petitioner and the judge being the same person (why have a jury, then?). Also, I do believe that the fact you have "notched" ten FAs could be used as evidence for my argument as well! :) However, no one else seems concerned about a possible COI, so I will post my review. Awadewit | talk 01:51, 10 November 2007 (UTC)[reply]
Oppose for now. As a lay reader, I was able to follow the majority of this page, which is a testament to the editor's ability to explain a complex subject; however, there are several issues that still need to be worked out before I believe the article will meet the FA criteria. Whether or not these can be addressed in a week or two, I am not sure. I am certainly not qualified to judge the page's comprehensiveness.
Prose issues:
- The lead feels listy to me, particularly the third paragraph which introduces terms but does not define or explain them. Oddly, the lead was not as easy to understand as other parts of the article.
- Parallel computer programs are harder to write than sequential ones - add a "because" clause to make this explicitly clear to the reader
- In recent years, power usage in parallel computers has also become a great concern - why?
- Moore's Law, despite predictions of its end, is still in effect. - Explain Moore's Law; beginning the paragraph with this sentence is abrupt and makes it difficult for the reader to follow your explanation.
- Briefly explain the applicability or need for Amdahl's law and Gustafson's law before launching into an explanation of them - it is hard for readers to see why they need to know this information. Help them out.
- Locking multiple resources using non-atomic locks introduces the possibility of program deadlock. - Perhaps explain a bit more? (A minimal sketch of the scenario follows below.)
- "Some machines are hybrids of these categories, of course, but this classic model has survived because it is simple, easy to understand, and gives a good first approximation. It is also—perhaps because of its understandability—the most widely used scheme." - Tell the reader who is saying this - why should the reader trust this quotation? (Go through the article and check all quotations for this same problem.)
- Advancements in instruction level parallelism dominated computer architecture from the mid-1980s until the mid-1990s. - This sentence is just hanging off the end of the section.
- Parallel computers based on interconnect network need to employ some kind of routing to enable passing of messages between nodes that are not directly connected. - I assume this is supposed to be "interconnected"?
- There are quite a few one-sentence paragraphs; these should either be expanded or integrated into other paragraphs. If there is only one sentence, the reader wonders why it was included. There must either be more to say or it is indeed rarely worth including.
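- On the deadlock point above: here is a minimal sketch (my own illustration, not taken from the article) of the classic scenario with two locks acquired without a consistent ordering or an atomic multi-lock primitive. The file name, function names, and compile command are illustrative; the point is only that each thread can end up holding one lock while waiting forever for the other.
 /* Two threads, two mutexes, opposite acquisition order => possible deadlock.
  * Compile with, e.g.:  gcc -pthread deadlock.c   (file name illustrative)   */
 #include <pthread.h>
 #include <stdio.h>
 
 static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
 static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
 
 static void *worker_1(void *arg)
 {
     (void)arg;
     pthread_mutex_lock(&lock_a);      /* thread 1 takes A first ... */
     pthread_mutex_lock(&lock_b);      /* ... then waits for B       */
     pthread_mutex_unlock(&lock_b);
     pthread_mutex_unlock(&lock_a);
     return NULL;
 }
 
 static void *worker_2(void *arg)
 {
     (void)arg;
     pthread_mutex_lock(&lock_b);      /* thread 2 takes B first ... */
     pthread_mutex_lock(&lock_a);      /* ... then waits for A       */
     pthread_mutex_unlock(&lock_a);
     pthread_mutex_unlock(&lock_b);
     return NULL;
 }
 
 int main(void)
 {
     pthread_t t1, t2;
     pthread_create(&t1, NULL, worker_1, NULL);
     pthread_create(&t2, NULL, worker_2, NULL);
     pthread_join(t1, NULL);           /* if each thread wins its first lock,  */
     pthread_join(t2, NULL);           /* neither join ever returns            */
     printf("finished without deadlocking (timing-dependent)\n");
     return 0;
 }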
Images:
- What do we think about having an image at the beginning of the article? It would help attract readers. We must pander.
- All of the images are on the right-hand side of the article. WP:MOS#Images, and aesthetic common sense, dictate some staggering.
MOS issues:
- The two bolded terms in the introduction are confusing and seem a bit misleading since the article is titled "parallel computing". If distributed computing is a term being introduced, which it looks like it is, it should be italicized per WP:ITALICS.
- Punctuation inside/outside quotation marks doesn't always follow WP:MOSQUOTE.
- Format all footnotes the same way - some use "pg" and some do not. Pick a style and stick with it.
- Spending a day perusing the WP:MOS might be worth it. I'm not a MOS guru by any means, but I did notice quite a few deviations from the rules I do happen to know. I just listed a few here.
Sourcing:
- For statements that include claims such as "first", "rarely used", "most", "best known", etc., please add citations to the article per WP:V. In my experience, these are the kinds of statements that are most often challenged.
- All sections need citations - not just for verification, but so the reader knows where to go for additional information. It is a courtesy and bolsters wikipedia's legitimacy. (Ex: "Multicore computing")
I hope this was helpful. Awadewit | talk 01:51, 10 November 2007 (UTC)[reply]
- Oppose for now. (These are my first FAC comments, so please let me know if I commit some faux pas)
- A few comments, mostly regarding the content:
- The intro could do a better job of introducing the concept.
- The first sentence could perhaps be rephrased to prose rather than a direct quote?
- While Amdahl's law is very important, its mention in the first paragraph seems a bit abruptly introduced.
- The first paragraph doesn't seem to "flow", but that may just be me. :)
- Background
- Parallel computing has a long history, but the background section treats it as a recent phenomenon. It's true that it's only gone mainstream recently, but there is a long and interesting history here (Connection Machines, MTA, dataflow computing and other interesting early parallel architectures). Some of that is mentioned in the history section (why is that last in the article?), but I would like the background and history to be merged and to lead the article.
- One idea might be to first mention early architectures, mention why they weren't successful, then introduce the clock speed barrier hit in 2004 and say that this has renewed interest.
- Classes of parallel computing
- ASICs aren't usually referred to as "computers", and they're used in a very different way.
- The first half of your statement is correct. As far as being used in a different way -- they are functionally no different than any of the other specialized co-processors listed in that section (GPUs, FPGAs). Raul654 04:40, 9 November 2007 (UTC)[reply]
- I think the central difference is that an ASIC is fixed function, while FPGAs, GPUs and CPUs can be reprogrammed for different tasks. henrik•talk 12:57, 9 November 2007 (UTC)[reply]
- An ASIC and an FPGA are programmed *exactly* the same way, using exactly the same languages. The only difference is that you push a button to convert the HDL into a bitmapping for the FPGA, whereas with an ASIC, you send your HDL off to the foundry and they produce your chip. What you're saying - that ASICs are domain specific and FPGAs are not - is true, but that's why they are listed in the 'specialized devices' section -- they are the ultimate in specialized devices (in that they are specialized for just one app). Raul654 13:02, 9 November 2007 (UTC)[reply]
- Perhaps the section could be roughly classified into Multicores (traditional multicores, GPGPU and CELL), Distributed computing and others?
- This is identical to what I already have in the article, except it lumps "symmetric multiprocessors" with "specialized devices" into what you call others. I believe they (SMPs) are significant enough and distinct enough from the other specialized devices to have a section of their own. Raul654 04:40, 9 November 2007 (UTC)[reply]
- Software
- I feel parallel programming languages deserve more than a single one-sentence paragraph.
- "Parallel programming languages remain either explicitly parallel or (at best) partially implicit with the programmer giving the compiler directives for parallelization." I'd argue that this is factually incorrect. For example pH (Parallel haskell), Sisal and Mitrion-C are all fully implictly parallel languages, though they are all fairly obscure. Perhaps it could be rephrased so that no mainstream or widely used languages are implicitly parallel?
- I agree. I spent the summer programming in Mitrion-C (In the next month or two, I have a work-in-progress paper coming out describing it; I should have a bigger, finalized version in the next half-year). The guy who invented SISAL spoke to my research group about a year ago. I've rewritten that section accordingly. Raul654 04:28, 9 November 2007 (UTC)[reply]
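- To illustrate what the discussed wording means by "the programmer giving the compiler directives for parallelization", here is a minimal C/OpenMP sketch (my own, not taken from the article). The loop body is ordinary sequential C; the #pragma is the programmer's hint that lets an OpenMP-aware compiler split the iterations across threads. The file name and compile command are illustrative.
 /* Directive-based (explicit) parallelism: ordinary C plus a programmer hint.
  * Compile with, e.g.:  gcc -std=c99 -fopenmp saxpy.c   (name illustrative)  */
 #include <stdio.h>
 
 #define N 1000000
 
 int main(void)
 {
     static double x[N], y[N];   /* static so the arrays are not on the stack */
     const double a = 2.0;
 
     for (int i = 0; i < N; i++) {   /* set up some input data */
         x[i] = (double) i;
         y[i] = 1.0;
     }
 
     /* Without the directive this loop runs sequentially; with it, an
      * OpenMP compiler distributes the iterations across available cores. */
     #pragma omp parallel for
     for (int i = 0; i < N; i++)
         y[i] = a * x[i] + y[i];
 
     printf("y[42] = %f\n", y[42]);
     return 0;
 }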
- General comments
- "Power and the heat problem" and the second half of "The move to (multicore) parallelism" should be merged.
- The software side of the problem and parallel programming models should be given more weight.
- In general, my personal opinion is that this is still quite far from the comprehensiveness (1b) criterion. Like Raul, I work in the field, and I would be happy to help, if you think my comments are justified. henrik•talk 13:17, 8 November 2007 (UTC)[reply]
- Object -
you said the introduction would be understandable by a layperson, but "operations are done simultaneously" sounds medical to a layperson. Even operations doesn't link to anything computer-related; it links to mathematical functions instead. It seems that the reference list is way too short. I found 9 instances of the word "typically" (a personal dislike), all but one unsourced. The picture with the caption "This will make the computation much faster than by optimizing part B, even though B got a bigger speed-up" seems a strange and unreasonable example: why 5x vs. 2x? It would be more reasonable to say "if there are two processors working separately" and "if they work together", which much more realistically shows the merits of parallel computing. This is a remarkably humane discussion about a candidacy; congratulations.--Keerllston talk 01:21, 10 November 2007 (UTC)[reply]
- The first paragraph was recently changed by Henrik. I've changed operation to instruction (which is more accurate terminology) and linked it to the respective article.
- The purpose of the picture you mention is to demonstrate Amdahl's law diagrammatically, and it does that accurately. I have tweaked the caption to make this more apparent. The purpose of choosing 5 and 2 is to show that a smaller speedup (2x) in a large section of code can be superior to a large improvement (5x) in a small part of code. Raul654 01:29, 10 November 2007 (UTC)[reply]
- That is true. It shows it dramatically, however, not very scientifically. Image:Amdahl-law.jpg is, although less æsthetically pleasing, more relevant. I notice you didn't say anything about "typically". "They are usually larger, typically having" has both "typically" and "usually" (redundant weasel terms), and it is ugly; I'd suggest "some" or "the majority" if those are meant.--Keerllston 17:07, 10 November 2007 (UTC)[reply]
- The diagram is actually about programming for parallel computers rather than Amdahl's law itself.--Keerllston 15:30, 12 November 2007 (UTC)[reply]
- It is neither a good illustration of Gustafson's law nor of Amdahl's law. It is instead a proper illustration of "Speedup in a sequential program", which does not seem to be notably mentioned. The difference between sequential and parallel programming doesn't seem to be mentioned much. Not to mention that "By working very hard" is not an encyclopedic tone.--Keerllston 19:38, 14 November 2007 (UTC)[reply]
- You're right about the tone. I copied the caption verbatim from Amdahl's law. I've tweaked it now to fix the problem. As far as illustrating Amdahl's law, you're just plain wrong. First, the article talks extensively about parallelization of sequential programs. The article already says "Any large math or engineering problem will typically consists of several parallelizable parts and several non-parallelizable (sequential) parts." It further said "where S is the speedup of the program (as a factor of its original runtime), and P is the fraction that is parallelizable." - I've added the word sequential in here just to make it explicit. The difference between sequential and parallel programming is covered in background sections 1.2, 1.3, and 1.4. (1.3 in particular). Raul654 20:21, 14 November 2007 (UTC)[reply]
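- To put numbers on the 5x-versus-2x point (the 75/25 split below is only an assumed illustration; the figure's actual proportions may differ), the general Amdahl-style accounting is
 \[ S \;=\; \frac{1}{(1-f) + f/s}, \]
 where \(f\) is the fraction of the runtime being improved and \(s\) is the speedup of that fraction. If part B is 25% of the runtime and part A the other 75%, then
 \[ \text{speeding up B by } 5\times:\quad S = \frac{1}{0.75 + 0.25/5} = \frac{1}{0.80} = 1.25\times, \]
 \[ \text{speeding up A by } 2\times:\quad S = \frac{1}{0.25 + 0.75/2} = \frac{1}{0.625} = 1.6\times. \]
 The article's simple form \(S = 1/(1-P)\) is the same formula in the limit where the parallelizable fraction \(P = f\) is sped up without bound (\(s \to \infty\)).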
- It has a two-paragraph "History" section as a topic? This should be reformatted, expanded, or deleted. There's a bit of history in the background; perhaps it belongs there?--Keerllston 19:38, 14 November 2007 (UTC)[reply]
- We cannot describe the history of parallel computing until the very end of the article, for the simple reason that such a section would be meaningless until the basic concepts are explained. The background mentions - in passing - that parallel computing is a hot topic now because of the end of frequency scaling. This is a fact which itself is explained at fair length there, and which is not heavily predicated on understanding many other topics. But beyond that, such a discussion is not appropriate until the end of the article. Raul654 20:15, 14 November 2007 (UTC)[reply]
- comment Bit-level parallelism is a red-linked article. If no such article exists, just expand the section in this article on this subject and get rid of the link Hmains 20:50, 10 November 2007 (UTC)[reply]
- I've copied (but not moved) much of the bit-level parallelism information into a separate bit-level parallelism article. If someone wants to expand that article beyond what I've copied from this one, that's fine by me, but it's otherwise unrelated to this FAC nomination. Raul654 15:27, 13 November 2007 (UTC)[reply]
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made on the article's talk page or in Wikipedia talk:Featured article candidates. No further edits should be made to this page.