Wikipedia:Featured article candidates/Parallel computing
- The following is an archived discussion of a featured article nomination. Please do not modify it. Subsequent comments should be made on the article's talk page or in Wikipedia talk:Featured article candidates. No further edits should be made to this page.
The article was promoted 18:17, 6 May 2008.
This is one of mine. I spent some time last year improving it, and I think it's up to FA status. I previously nominated it, and I believe all of the substantive suggestions have been addressed. Raul654 (talk) 07:40, 18 April 2008 (UTC)
- Comment, a very well written (and accessible) introduction to parallel computing. Sections such as the one covering GPGPUs seem a bit too choppy due to the one-liner paragraphs, and could do with some editing to ensure better flow. Specialized hardware like systolic arrays seems to have been left out. Also, the hardware described seems to focus a bit too much on von Neumann machines; tackling parallelism from an anti-machine point of view seems to be absent. I am not exactly sure if they need to be in an introductory article, but it does seem serious enough to hold back a support (I am open to discussion, though). But overall, it is a very good article. Now we need one that tackles parallel/concurrent programming (from a programming language/compiler point of view). --soum talk 09:17, 18 April 2008 (UTC)
- The article focuses on von Neumann Machines because more-or-less all modern computers are von Neumann Machines. Antimachines (that is, FPGAs and the like) are a relatively new area of parallel computing, covered in the "Reconfigurable computing with field-programmable gate arrays" section. This is appropriate weight given that they are rather small players in the field. Systolic arrays are mentioned briefly in the Flynn's taxonomy section, but I don't go into detail because nobody ever figured out what they were useful for. Raul654 (talk) 17:54, 18 April 2008 (UTC)
- I would have to agree with Raul here that the discussion of other computing architectures is given sufficient space. I had a similar objection at the first FAC, and I made some small additions to try to rephrase some parts from von Neumann-specific terms to a more generic dependency discussion - perhaps that could be done a bit more in some parts (the "Instruction-level parallelism" section comes to mind), but in general I think the von Neumann focus is ok. henrik•talk 21:11, 21 April 2008 (UTC)
Oppose—Support: much improved! Tony (talk) 13:44, 26 April 2008 (UTC). The writing needs a thorough cleanse. This article could scrub up very nicely, though. Here are random examples from the top that demonstrate the density of issues throughout.
- "Parallel computing ... computing ... carried out ... Parallel computing ... carried out ... parallel ... Parallel computing" in the first two and a half lines. (That is, lotsa repetition, just where we want to engage readers.)
- "bit-level parallelism, instruction level parallelism ... high performance computing"—we have to pipe to make up for the deficiencies in punctuation of many article titles. The first is properly hyphenated. His Grace has been on the warpath fixing dashes and hyphens in article titles, and good on him.
- I don't understand this comment. Raul654 (talk) 05:27, 19 April 2008 (UTC)
- Raul, piping would yield this succession: "bit-level parallelism, instruction-level parallelism ... high-performance computing", which is not only consistent, but follows the best American editing practices as seen in publications such as Scientific American. Tony (talk) 06:26, 19 April 2008 (UTC)
- Fixed. Raul654 (talk) 03:13, 21 April 2008 (UTC)
- "getting good parallel program performance"—inelegant; what about "achieving good ..."?
- Fixed. Raul654 (talk) 05:27, 19 April 2008 (UTC)
- MOS breach: hyphen used as an interrupter. Spaced en dash or unspaced em dash—take your pick, and needs to be applied consistently.
- I think this is taken care of. Raul654 (talk) 18:32, 21 April 2008 (UTC)
- "Parallel computing on the other hand uses multiple processing elements ..."—Most readers would agree that two commas are required.
- Fixed. Raul654 (talk) 05:27, 19 April 2008 (UTC)
- "... a problem. The problem ..." Tony (talk) 12:07, 18 April 2008 (UTC)
- Fixed. Raul654 (talk) 05:27, 19 April 2008 (UTC)
Comments
- Current ref 7 "J. M Rabaey Digital Integrated Circuits" is missing page numbers.
- Likewise current ref 12 K Hwang and F. A. Briggs, Computer architecture...
- http://nhse.org/index.htm dead links for me.
- What makes http://www.webopedia.com/TERM/c/clustering.html a reliable source?
- All other links checked out okay. Ealdgyth - Talk 14:21, 18 April 2008 (UTC)
- I fixed the Rabaey reference; the Hwang/Briggs reference was added by Henrik in this edit - you'll have to ask him for page numbers. I've removed nhse.org; it was an external link, and there's no shortage of them. I consider webopedia reliable because, to be frank, nothing I've seen on that site struck me as being incorrect. Raul654 (talk) 18:52, 18 April 2008 (UTC)
- I've dropped a note on user talk:Henrik asking him to supply a page number. Raul654 (talk) 05:24, 19 April 2008 (UTC)[reply]
- I no longer have that book, but it is fairly general information which can easily be cited from a different source. I've done so. henrik•talk 21:00, 21 April 2008 (UTC)
Oppose. I agree with Tony's comment about the article needing a thorough seeing to. There are way too many MoS glitches and prose problems as it stands. The article is also severely under-referenced, with too many sections being completely unreferenced.
- "Traditionally, computer software has been written for serial computation". Traditionally? Which tradition is that? The article only discusses parallelism in digital computing; it ought to be made clear that analogue or quantum computing, for instance, are not covered.
- Analog computing may or may not be parallel (depending on the design). Regardless, it went the way of the dinosaur over 40 years ago.
- Quantum computing and DNA computing both might be parallel (at least theoretically) but, much like cold fusion, both of them are in their proto-infancy. Neither of them has ever produced a single useful result. (The most complicated quantum computing program I've heard of factors numbers up to 10). By "traditionally", I'm referring to the fact that 99% (or more) of source code out there is sequential. Raul654 (talk) 02:32, 19 April 2008 (UTC)
- "Only one instruction may execute at a given time – after that instruction is finished, the next is executed." Seems to ignore pipelining, in which a processor will work on several instructions in parallel, although admittedly only executing one at a time.
- First, it doesn't ignore pipelining - it talks about pipelining at length in the instruction parallelism section. Second, as you said, a pipelined processor only executes one instruction at a time (although it, by definition, has several coming through the pipeline at a time). Raul654 (talk) 02:33, 19 April 2008 (UTC)
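A minimal sketch of the distinction being made here, assuming a hypothetical three-stage pipeline simulated in Python (the stage names and instruction labels are illustrative, not from the article): several instructions are in flight at once, but only one occupies the execute stage in any given cycle.

    # Hypothetical 3-stage pipeline: at cycle c, stage s holds instruction c - s,
    # so several instructions are in flight but only one is in "execute" per cycle.
    stages = ["fetch", "decode", "execute"]
    instructions = ["i1", "i2", "i3", "i4", "i5"]

    for cycle in range(len(instructions) + len(stages) - 1):
        in_flight = {stage: instructions[cycle - s]
                     for s, stage in enumerate(stages)
                     if 0 <= cycle - s < len(instructions)}
        print(f"cycle {cycle}: {in_flight}")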
- The fragment I quoted says "at a given time". What does a given time mean? --Malleus Fatuorum (talk) 02:55, 19 April 2008 (UTC)
- That word can be dropped and the sentence still means the same thing. Raul654 (talk) 04:43, 19 April 2008 (UTC)[reply]
- Fixed. Raul654 (talk) 05:23, 19 April 2008 (UTC)
- "The total runtime of a program is proportional to the total number of instructions multiplied by the average time per instruction." Proportional to? Isn't it equal to? Total runtime? Total number of instructions?
- Fixed. Raul654 (talk) 05:23, 19 April 2008 (UTC)
- "... generally cited as the end of frequency scaling as the dominant computer architecture paradigm." Paradigm?
- Yes, as in a general way of doing things. Paradigm: a philosophical or theoretical framework of any kind - http://www.merriam-webster.com/dictionary/paradigm Raul654 (talk) 05:01, 19 April 2008 (UTC)[reply]
- In Flynn's taxonomy, the text overwrites the table.
- I have been unable to reproduce this error. Raul654 (talk) 05:01, 19 April 2008 (UTC)[reply]
- "... advancements in computer architecture were done by doubling computer word size—the amount of information the processor can execute per cycle." Advancements? Were done? Increased word size increases the extent of available memory; it doesn't per se increase the amount of information that can be processed per cycle. What does "information" mean in this context anyway?
- "advancements in computer architecture were done" - I suppose this could be rephrased. "Advancements in computer architecture were driven by doubling..."
- "Increased word size increases the extent of available memory" - true. Just one quick caveat here - increases in word size do not increase the amount of memory; they increase the amount of *addressable* memory. Raul654 (talk) 06:32, 19 April 2008 (UTC)
- "it doesn't per se increase the amount of information that can be processed per cycle" - very, very, very false. An 8-bit processor processes data in chunks (called "words" - see Word (computing)) of 8 bits; a 32-bit processor processes data in chunks of 32 bits. It can do 4 times as much computation in a cycle as an 8-bit processor can. Raul654 (talk) 05:07, 19 April 2008 (UTC)
- Please don't try to patronise me. The number of bits assigned to the address of the data to be worked on does not of itself increase the amount of data that can be worked on per cycle. --Malleus Fatuorum (talk) 01:04, 24 April 2008 (UTC)
- I'm not patronizing you - you're completely and totally wrong. The word size is not just the number of bits used to address memory, it's also the size of the registers inside the processor. If you do "add R1, R2, R3" on an 8-bit processor, it adds two 8-bit registers and stores the value into a third 8-bit register; if you do it on a 32-bit processor, it adds two 32-bit registers and stores them into a 32-bit register -- 4 times as much work in a single cycle. Raul654 (talk) 01:08, 24 April 2008 (UTC)
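A minimal sketch of the register-width point, using a hypothetical byte-wise emulation in Python (not from the article): a machine limited to 8-bit operations needs four add-with-carry steps to do what a 32-bit ALU does in a single one.

    # Emulate a 32-bit add using only 8-bit chunks, least-significant byte first.
    def add32_via_8bit(a, b):
        carry, result = 0, 0
        for i in range(4):                       # four byte-wide adds
            t = ((a >> 8 * i) & 0xFF) + ((b >> 8 * i) & 0xFF) + carry
            result |= (t & 0xFF) << (8 * i)      # keep the low 8 bits of each step
            carry = t >> 8                       # carry out feeds the next byte
        return result & 0xFFFFFFFF

    assert add32_via_8bit(0x89ABCDEF, 0x12345678) == (0x89ABCDEF + 0x12345678) & 0xFFFFFFFF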
- I don't believe that you fully understand what you're talking about. My view remains that this article should not be featured for both prose and technical reasons. Others may decide for themselves, but the article would have to improve dramatically before I'd consider supporting it. --Malleus Fatuorum (talk) 01:35, 24 April 2008 (UTC)
- The article is correct as it stands. The "error" you have described is a result of your misconception that a word pertains only to the size of the address space. It does not. Raul654 (talk) 02:18, 24 April 2008 (UTC)
These are just some examples of what needs to be addressed in this article; there are very many more. --Malleus Fatuorum (talk) 01:56, 19 April 2008 (UTC)
- Note: Following Tony's copyedit and Epbr123's MoS review, Tony1, Laser brain and GrahamColm are satisfied with the prose, and I can't detect any remaining MoS issues. SandyGeorgia (Talk) 17:55, 6 May 2008 (UTC)
- image:numa.jpg is a really bad image, with lines of different widths that aren't aligned and are overlapping the text; the acronym DSM is never mentioned in the caption or article (only somewhat distantly as "distributed shared memory"); and it is inappropriately compressed as a JPEG to boot. It should be recreated as an SVG. The other PNGs in the article would benefit from being recreated as SVGs too, though they are decent-looking. — brighterorange (talk) 03:30, 19 April 2008 (UTC)
- Fixed. Raul654 (talk) 04:59, 19 April 2008 (UTC)[reply]
- Comment: The second and third lines in Image:Optimizing-different-parts.svg appear to switch "A" and "B". Compare with Image:Optimizing-different-parts.png. NatusRoma | Talk 02:25, 20 April 2008 (UTC)
- True - fixed. Raul654 (talk) 05:47, 20 April 2008 (UTC)
- Oppose: This important article needs further improvement before it is ready for FA status.
- Questions: What style guidelines is the proposer using for notes and referencing? We see pg and pgs as abbreviations for page and pages (with what warrant, from WP guidelines or elsewhere?). We see et al. sometimes in italics, sometimes lacking its full stop. We see quoted material surrounded by quote marks but also italicised – or just italicised. We see an opening double quote mark matched with a closing single quote mark. Some citations end with a full stop, while similar ones end without. Spaces are apparently inserted or omitted as if they were optional decorations, as in July 1998, 19(2):pgs 97–113(17). (What does the (17) mean, by the way?) I am surprised that I find no specific comment on formatting of references, above. I will oppose until some effort is made to fix it. I might help to fix it, once I see that the problem is taken seriously and worked on.
- –⊥¡ɐɔıʇǝoNoetica!T– 23:59, 21 April 2008 (UTC)
- Noetica, can you pls point us to the guideline that deals with pgs vs pp. and the correct usage of et al? As I recall, when we last fought out et al at MOS, there was no conclusion. SandyGeorgia (Talk) 03:17, 22 April 2008 (UTC)
- My reading of the guidelines for page numbers (WP:CITATION and related MOSpages) turns up only inconsistencies and a failure to address the issue. There are examples with no abbreviation at all ("93–107"), with "p." or "pp." and a space ("pp. 93–107"), with "p." or "pp." but no space ("pp.93–107"). Editors also use "p" or "pp" with or without a space ("pp 93–107"; "pp93–107"), or "page" or "pages" ("pages 93–107"), or more rarely (and without support from style guides) "pg" or "pgs", with or without a space or a full stop ("pgs.93–107"; "pgs 93–107"; etc.).
- Myself I recommend only the first two of these styles. They are the only ones that major style guides advocate: ("93–107"; "pp. 93–107", preferably done with a hard space: "pp. 93–107").
- In particular, here I have asked why "pg" has been used. No reputable style guide that I know of gives it express support, and most well-edited articles do not use it.
- But what loses me immediately is editors' inconsistency. In the present article we see this with "et al." (which almost all authorities want unitalicised and with a full stop). If an article comes before us here with three versions of the thing, I cannot think that the proposer is serious. In the present case, I have already shown that I am ready to help, once I can see that the proposer is paying attention.
- –⊥¡ɐɔıʇǝoNoetica!T– 04:13, 22 April 2008 (UTC)
- Well, I actually went in and did the cleanup you requested myself for Raul, since I have never encountered this kind of oppose before at FAC, and there are no guidelines. I did all I could; if you still see something, it should be minor enough that you might address it yourself. When there's no guideline, it's hard to know how to fix something, and even after all our discussion at MOS, I don't know how to use et al, because we came to no conclusion in those acrimonious MoS discussions. Hard to ask an editor to fix something that has no Wiki guideline. SandyGeorgia (Talk) 04:26, 22 April 2008 (UTC)
- Thanks! Raul654 (talk) 21:32, 22 April 2008 (UTC)
- Comment: I found this article excessively detailed and technical, and I have a bachelor's in Computer Science from MIT. For example, my eyes glazed over at the explanation of the power consumption equation; I don't see why it is necessary to include this instead of simply noting that increasing frequency increases power consumption.
- I don't know if this is a consideration for FA's, so I will not vote. --Slashem (talk) 18:41, 22 April 2008 (UTC)
- It should surprise no one that the article is technical - it's a highly technical topic. The question is accessibility, and several reviewers have explicitly noted that it is accessible to laypeople (user:soum's comment above; user:Awadewit's comment during the previous FAC). Raul654 (talk) 21:28, 22 April 2008 (UTC)
- Perhaps you could answer my specific example. When you are done, I have more. --Slashem (talk) 21:32, 22 April 2008 (UTC)
- I give the equation and the conclusion drawn from it because it is more pedagogically sound than simply giving the conclusion. (It also happens to be a rather important equation for computer engineers - one of the few really important equations in computer engineering, actually). Raul654 (talk) 21:34, 22 April 2008 (UTC)
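For readers following along, the equation under discussion is presumably the standard dynamic-power relation P = C × V² × F; a minimal sketch with hypothetical values shows why raising frequency (which in practice also requires raising voltage) inflates power consumption.

    # Dynamic power P = C * V^2 * F; all values below are hypothetical.
    def dynamic_power(c, v, f):
        return c * v ** 2 * f

    base = dynamic_power(1e-9, 1.2, 2e9)      # roughly 2.9 W at 2 GHz and 1.2 V
    faster = dynamic_power(1e-9, 1.4, 4e9)    # doubling F typically needs more V too
    print(f"{base:.1f} W -> {faster:.1f} W")  # power grows faster than frequency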
- "Pedagogically sound?" I don't use Wikipedia as a textbook, not to mention this is hypertext. Your audience is not composed of computer engineers. --Slashem (talk) 21:40, 22 April 2008 (UTC)[reply]
- Yes, it is pedagogically sound, meaning 'this is a good way of making the information comprehensible'. As for the audience, I'm aware they are not computer engineers. But as I have already said, several laypeople (like Awadewit, an english-lit major) have already said they found it accessible. Thus, I conclude that I am doing it correctly. Raul654 (talk) 21:44, 22 April 2008 (UTC)[reply]
- "Pedagogically sound?" I don't use Wikipedia as a textbook, not to mention this is hypertext. Your audience is not composed of computer engineers. --Slashem (talk) 21:40, 22 April 2008 (UTC)[reply]
- Fine, you don't value my opinion, I won't give it to you again. --Slashem (talk) 21:47, 22 April 2008 (UTC)
- It's not that I don't value your opinion. However, the one specific suggestion you have given - that I should remove the equation (or at least that was the clear implication of your comments) - would in my opinion not be an improvement. Do you have any more specific suggestions? Raul654 (talk) 21:58, 22 April 2008 (UTC)
- Apparently we have a philosophical disagreement, which may perhaps be best explored on the talk page. --Slashem (talk) 22:05, 22 April 2008 (UTC)
- See Relational database for an example of an article which describes a technical topic in an accessible way while leaving more detailed and technical aspects to other articles which can be linked to. --Slashem (talk) 21:40, 22 April 2008 (UTC)
BTW if you want me to shut up, you can just admit it's not a consideration for FA's, since this is the FAC page. You don't have to try to argue about the facts, the way Bush tried to deny Global Warming. --Slashem (talk) 21:43, 22 April 2008 (UTC)
- I think this is a good article on a topic that needed coverage here. I've merged some or all of the choppy parastubs and gone through it, leaving a few inline queries. Happy to change to support when they're dealt with. Tony (talk) 06:43, 23 April 2008 (UTC)
Replying to Tony's first inline citation:
No program can run more quickly than the longest chain of dependent calculations (known as the [[critical path]]), <!--fix the next clause: doesn't make sense-->since the fact that the dependent calculations force an execution order. <!--And the next sentence ...-->Fortunately, most algorithms do not consist of a long chain of dependent calculations and little else; opportunities usually exist to execute independent calculations in parallel.
Let's say you have something that looks like this:
- A = something
- B = f(A)
- C = f(B)
- D = f(C)
- E = f(D)
You have to know A before you calculate B, calculate B before C, calculate C before D, etc. That is what the first sentence means. The second sentence says that in real life, this is not a common situation. It's more common to see something like this:
- A = something
- B1 = f(A)
- B2 = f(A)
- C1 = f(B1)
- C2 = f(B1+A)
- C3 = f(B2)
- C4 = f(B1+B2)
- D1 = f(C1)
- D2 = f(C2+B2)
- D3 = f(C3+B2)
- D4 = f(C4+B2)
- E1 = f(D4)
The first example consisted of a single chain of dependent operations - the critical path (the longest chain of operations that must be executed one after another) - with nothing else to do, so there was no opportunity for parallelism. The second example also has a critical path (which I think is A->B1, B2->C4->D4->E1) but lots of other things to do besides, so it will parallelize much better than the first. Any suggestions for how to rephrase the article to make this clearer? Raul654 (talk) 07:00, 23 April 2008 (UTC)
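A minimal sketch of how the second example parallelizes, assuming Python's standard concurrent.futures thread pool and a placeholder f (both illustrative, not from the article): each level's independent calculations run together, while the critical path still forces five sequential steps.

    from concurrent.futures import ThreadPoolExecutor

    def f(x):
        return x + 1                              # stand-in for a real calculation

    with ThreadPoolExecutor() as pool:
        A = f(0)
        B1, B2 = pool.map(f, [A, A])              # independent: run in parallel
        C1, C2, C3, C4 = pool.map(f, [B1, B1 + A, B2, B1 + B2])
        D1, D2, D3, D4 = pool.map(f, [C1, C2 + B2, C3 + B2, C4 + B2])
        E1 = f(D4)                                # end of the critical path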
- I think the second clause raised by Tony merely has a piece of misplaced text that renders it confusing. It could be:
- No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since dependent calculations force an execution order. However, most algorithms do not consist of a long chain of dependent calculations; opportunities usually exist to execute independent calculations in parallel.
- or
- No program can run more quickly than the longest chain of dependent calculations (known as the critical path), since calculations that depend upon prior calculations in the chain must be executed in order. However, most algorithms do not consist of only a long chain of dependent calculations; there are usually opportunities to execute independent calculations in parallel.
- Sorry, I'm not a good word nerd; Tony might improve it. SandyGeorgia (Talk) 07:12, 23 April 2008 (UTC)
- I used Sandy's second paragraph from above. I also went over all the things Tony commented on - most were good; I tweaked one or two of them. I think I've addressed all of the above objections now. Raul654 (talk) 00:42, 24 April 2008 (UTC)
For the record, I do not believe there are any remaining unaddressed objections. Raul654 (talk) 17:32, 26 April 2008 (UTC)[reply]
- Support. Fulfills the FA criteria. Some comments, though: (a) "Only recently, with the advent of x86-64 architectures..." - for a topic that dates quickly (for those of us not in the know), could a more quantitative date/date-range be used?
(b) I don't see anything from the "Hardware" section in the lead. (c) "Automatic parallelization of a sequential program by a compiler is the "holy grail" of parallel computing." - It may be an obvious/shared feeling in the computing world, but here I think it best to cite/attribute such grand statements. maclean 19:18, 27 April 2008 (UTC)
- I've fixed all of the above - added a date range for recently, hardware information to the lead, and a citation for the holy grail sentence and the one after it. Raul654 (talk) 07:28, 29 April 2008 (UTC)
- "Ideally, the speed-up from parallelization should be linear—doubling the number of processing elements should halve the runtime". I added "for a fixed problem" to this, but I guess someone removed it. If you double the number of processing elements, and also double the work, you don't "ideally" expect half the runtime. A few Hardware sections look short and could be filled out with how each corresponds to a Type of parallelism. Quod? - (Dic nobis) 19:02, 30 April 2008 (UTC)[reply]
- To respond to your first point - ideally, doubling the number of processors halves the runtime for any problem, be it of fixed size (like finding the first 100 prime numbers), or unfixed size (like a simulation with a variable resolution). The latter is referred to as "soft scaling". Increasing the amount of work to do obviously increases the runtime, for problems of both fixed and unfixed size. It is misleading to say "for a fixed problem size" when, in fact, linear optimal parallelization applies to problems of both fixed and unfixed sizes. (I'm the one who removed it) It would have been correct to say "for a problem of some given size" but I would assume people are smart enough to figure that one out.
- As for your second suggestion, I don't follow -- generally, all of the types of parallel computers listed in the hardware section (multicore, multiprocessors, cluster, MPP, and grid) can and do implement all the types of parallelism listed in the article (bit, instruction, data, and task). Only the computers listed in the 'specialized hardware' section (GPUs, FPGAs, and Vector processors) are particularly associated with certain types of parallelism (specifically, they are all particularly good at bit and data parallelism). Raul654 (talk) 05:11, 1 May 2008 (UTC)
- I think something ought to be said about problem size. Much of the parallel computing sector views additional processors as a way to do a larger problem, not so much a way to do a problem faster. Talking about runtime and Amdahl's law excludes the former. About the Hardware sections, they look short, and one suggestion for some additional content might be to mention types of applications typically found with SMP or MPP. Another suggestion would be to mention some of the network forms used in distributed computing and algorithm analysis (mesh, hypercube, and so forth). Quod? - (Dic nobis) 18:52, 1 May 2008 (UTC)[reply]
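A minimal sketch of the two viewpoints raised here, assuming the usual statements of Amdahl's law (fixed problem size) and Gustafson's law (problem grows with the machine); the parallel fraction p = 0.95 is hypothetical.

    # Amdahl: speed-up for a fixed-size problem, parallel fraction p, n processors.
    def amdahl(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Gustafson: scaled speed-up when the problem size grows with the machine.
    def gustafson(p, n):
        return (1.0 - p) + p * n

    for n in (2, 8, 64, 1024):
        print(n, round(amdahl(0.95, n), 1), round(gustafson(0.95, n), 1))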
- I've added "for some given problem size" to that section.
- As for applications, you make a very good point. I've added an applications section, with a listing of the most common parallel computing problems (taken from the Berkeley paper).
- Network forms are already discussed in the memory section. All of the network topologies you just mentioned are already explicitly mentioned in the article. Raul654 (talk) 06:01, 2 May 2008 (UTC)
- I missed the network forms. Content looks good. Quod? - (Dic nobis) 00:21, 4 May 2008 (UTC)[reply]
- Support, concerns addressed or not deal-breakers. --Laser brain (talk) 01:54, 5 May 2008 (UTC)
Comments, very good article, almost there. The prose is excellent. I read this from a "general audience" point of view since I am not familiar with the subject matter, and I think it makes a very good reference. Wikilinks are present and appropriate for context and further understanding. A few minor issues:
"It has been used for many years, mainly in high-performance computing..." Can we make this active voice and say who has been using it? Universities, research labs, etc?
- You're asking me to pigeonhole an entire industry, and that simply cannot be done. The simplest answer is that everyone uses parallel computing in some form or another - every microprocessor since the 70s has had built-in bit-level parallelism, and almost every microprocessor since the 80s has had built-in instruction-level parallelism. Multicore parallelism appears to be headed for the same level of ubiquity. On the flip side, SMPs and distributed parallel systems (clusters, MPPs, grids) were for many years used in both academia (for research) and industry (to solve real-world problems), and I expect this trend to continue. Raul654 (talk) 17:48, 4 May 2008 (UTC)
Hyphen in lead should be a spaced en dash or unspaced em dash.
- Fixed. Raul654 (talk) 17:29, 4 May 2008 (UTC)
In the Dependences section, I think you need some verbal cue when you are transitioning from explanatory text to an example. It didn't flow well for me.
- Fixed. Raul654 (talk) 17:29, 4 May 2008 (UTC)
- The code box that has the "pseudocode that computes the first few Fibonacci numbers" runs under Image:Superscalarpipeline.png for me - not a very clean look.
- I'm assuming by "runs under" he means the picture thumbnail and the code box overlap somehow. I have been unable to reproduce this error - looks fine in both IE and Firefox up to and including 1024x768 resolution.
- It's not that important. The code box basically appears to be determined to go all the way across the screen whether there's an image there or not. Text wraps around the image, but the image and the code box ignore each other when they cross paths. At least the code box goes under the image. --Laser brain (talk) 01:54, 5 May 2008 (UTC)
"This is broadly analogous to the distance between basic computing nodes." Avoid beginning sentences with "this" in reference to a previous idea. Please restate the idea. "This classification system is..."- Fixed. Raul654 (talk) 17:29, 4 May 2008 (UTC)[reply]
- I think this topic could benefit from a "further reading" heading where you might list other prominent texts in the field that you may not have cited. --Laser brain (talk) 05:59, 2 May 2008 (UTC)
- Comment. For such a broad subject, the spectrum of cited sources seems quite poor. The article should mirror the scholarly literature. -Dany —Preceding unsigned comment added by 88.44.97.210 (talk) 10:12, 2 May 2008 (UTC)
- Patterson and Hennessy's textbooks, cited a number of times in the article, are considered the gold standard for computer engineering. Just read their ACM Queuecast introduction, which starts out by saying they probably don't need an introduction, since you've probably already heard of them. (Or don't take my word for it: "With co-Fellow Patterson, Hennessy co-wrote two engineering textbooks on leading edge computer architecture and design that have been used around the world. These texts have been updated four times and are considered the gold standard in this field." [1]) Raul654 (talk) 12:13, 2 May 2008 (UTC)
- Support. I have made some (minor) suggestions, [2], but I won't lose any sleep if they are reverted. (I grew tired of reading "a number of".) It takes a lot of skill to write an accessible, encyclopedic article on computing; there are many neologisms to juggle with. This valiant effort has been successful. GrahamColmTalk 17:25, 2 May 2008 (UTC)
- Oppose – For starters, there's a lot of redundant text ("can almost always"). Secondly, there are several technical terms used that have no meaning to a first-time reader unless he clicks on them. Try and provide some background context for terms such as "race condition" etc. in the lead. The ToC looks bloated and untidy. Another point: when you talk about frequency scaling, it is mentioned that it was standard till 2004? What happened afterward? The current technology trend and the limitations of frequency scaling could be mentioned. What would be ideal for this article is an animated image. Can this be arranged? 11:43, 4 May 2008 (UTC) — Preceding unsigned comment added by Nichalp (talk • contribs)
- Try and provide some background context for terms such as "race condition" etc in the lead. - the lead is there to summarize the topic, not to provide background information on everything discussed therein. Background information goes in the "Background" section. If people want to know what race conditions are, they can read past the lead into the article, or click on the term (because it's linked).
- Exactly my point. The lead should tell you about the article in short without expecting the user to navigate away from the page. Think of the parallel in a print encyclopedia: Race condition (For more information, see Race condition on page 1024). It doesn't read right, and nor can you expect a user (especially a non-technical user -- doesn't apply to me btw) to scroll down to check and see if the meaning of a race condition is covered or not. You need to provide some context so that the meaning can roughly be deduced. =Nichalp «Talk»= 18:50, 6 May 2008 (UTC)
- The TOC is substantial, but I don't think it's overwhelming. Looking at the articles promoted to FA last week, the TOC in this article is on par with Degrassi: The Next Generation, Marjory Stoneman Douglas, and 2005 ACC Championship Game and others.
- This TOC passed FAC with 15 supports. SandyGeorgia (Talk) 02:47, 5 May 2008 (UTC)
- But it certainly can be minimized by summarising subsections. =Nichalp «Talk»= 18:50, 6 May 2008 (UTC)
- When you talk about frequency scaling, it is mentioned that it was standard till 2004? What happened afterward?
- This is covered in the background section: Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.[8] Moore's Law is the empirical observation that transistor density in a microprocessor doubles every 18 to 24 months. Despite power consumption issues, and repeated predictions of its end, Moore's law is still in effect. With the end of frequency scaling, these additional transistors (which are no longer used for frequency scaling) can be used to add extra hardware for parallel computing.
- :) Wasn't exactly asking for the full answer, rather a brief mention of the context. I see it needs a copyedit for a better flow. I'll try and work out a draft tomorrow. =Nichalp «Talk»= 18:50, 6 May 2008 (UTC)
- The current technology trend and limitation of frequency scaling could be mentioned. -- It is. See my response to your previous point.
- "What would be ideal for this article is an animated image. Can this be arranged?" - an animated image of what? Raul654 (talk) 15:20, 4 May 2008 (UTC)[reply]
- I'll try and think of something and let you know. =Nichalp «Talk»= 18:50, 6 May 2008 (UTC)[reply]
- The above discussion is preserved as an archive. Please do not modify it. No further edits should be made to this page.