Talk:Central processing unit/Archive 2
This is an archive of past discussions about Central processing unit. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
Visuals
I'd like to add some more visuals to the new sections I'm writing for modern CPU design considerations. However, I'd like to do so without resorting to block diagrams. No offense to block diagrams, but they just aren't visually arresting, nor are they something that the non-engineer/non-programmer would likely be interested in looking over. Any suggestions on interesting visuals to provide for ILP and TLP? I'm leaning towards just inserting an image of a monster like POWER5 or Sun's UltraSPARC T1 into the article for the latter :) -- uberpenguin 05:33, 11 December 2005 (UTC)
- Tried adapting the style of diagram Jon Stokes (Hannibal) uses for this purpose? -- Wayne Hardman 16:10, 23 January 2006 (UTC)
To add, or not?
I've been toying with the idea of including some sort of discussion of CPU cache design and methodology as well as some blurb about RISC vs CISC. However, I keep coming back to a couple of major mental blocks. First, the article is already fairly lengthy, and I'm afraid to add any more major sections for fear of making it too all-inclusive. Second, I want to keep the article as close as possible to a discussion of STRICTLY CPUs (not computers, not memory, not peripherals, not software), and I somewhat feel that RISC vs CISC is an ISA topic first and foremost and should be covered in discussions of ISA design, not CPU design. Finally, the section discussing the motivations for and function of superscalar architecture does very briefly touch on why CPU caches are necessary for very deep pipelines, so I'm tending to believe that a lengthy diversion into cache methodology would be overspecific and largely detract from the flow of the article. Input is appreciated on any or all of these points! -- uberpenguin 06:16, 11 December 2005 (UTC)
- The cache subject really belongs more in the memory arena than in the CPU area; cache is a memory function despite the fact that it is coupled to the CPU design and often is on the same chip as the CPU core. Thus it doesn't seem like it should have more than a mention in passing in this article. RISC/CISC is really an ISA issue. While it certainly affects the details of a CPU design (and vice versa), it is a more detailed and somewhat separate subject. As you say, the article is already large, and thus shouldn't be expanded with these topics IMHO. Also, note Wikipedia has other articles which can and do treat these. -R. S. Shaw 01:08, 13 December 2005 (UTC)
- I'd like to point out that CPU cache is already an FA. MarSch 14:06, 5 March 2006 (UTC)
- Yeah, I have long since dropped the idea of adding much of a discussion of cache here. -- uberpenguin 16:00, 5 March 2006 (UTC)
CPU Clustering
Hey,
I'm thinking that something on clustering should be added to this article. What do you guys think? --ZeWrestler Talk 16:56, 12 December 2005 (UTC)
- The article already touches on SMP and NUMA as TLP strategies. Cluster computing is a systems-level design approach and really has nothing to do with CPU design at all. -- uberpenguin 20:02, 12 December 2005 (UTC)
- Ok, thanks. --ZeWrestler Talk 16:45, 13 December 2005 (UTC)
Nitpicking
A possibly overly pedantic remark: The article says "The first step, fetch, involves retrieving an instruction (which is a number or sequence of numbers) from program memory.". I would actually say that the instruction is represented by a number rather than that it is a number. That is, the instruction is a conceptual entity and not a concrete one. When for example a processor manual refers to the ADD instruction we think of it as something other than a number. Or would it be confusing to make this distinction? -- Grahn 03:39, 15 December 2005 (UTC)
- I can't see how changing the phrasing from 'which is a...' to 'which is represented by...' could cause more confusion. However, while I do agree with you that an instruction is more conceptual than concrete, I think that for all intents and purposes an instruction IS a number in almost every major device that could be called a CPU. How does one gauge binary compatibility of two CPUs? Is it not whether the instructions (numbers) used in software can be interpreted successfully for the same end result by both? One could hardly argue that two CPUs are binary/ISA compatible if they used the exact same (conceptual) instructions but used different numbered opcodes, a different signed representation, a different FP system, etc.
- On the other hand, as I said earlier, your suggested phrasing really doesn't make the text more confusing, so I'll modify it per your suggestion. -- uberpenguin 04:25, 15 December 2005 (UTC)
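- As a concrete illustration of the sense in which an instruction "is" a number at the machine level: on MIPS, for example, the immediate-add instruction addi $t0, $t0, 5 assembles to the single 32-bit word 0x21080005, whose fixed bit fields carry both the operation and its data:

    001000 01000 01000 0000000000000101   = 0x21080005
    opcode rs    rt    immediate
    (addi) ($8)  ($8)  (the constant 5)

Two CPUs are binary compatible only if they agree on exactly this kind of mapping from numbers to operations.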
I think it would be good if there were examples of where elements of the Harvard architecture are commonly seen as well. I am not quite sure where this is the case, except maybe in the segments in the Intel 80x86 processors, and in embedded systems where the code is in ROM and the data is in RAM (but I am not sure that this counts if the RAM and ROM share the same address bus). Gam3 05:00, 27 December 2005 (UTC)
- Superscalar cache organization and some SIMD designs, mostly. This article isn't about cache (as discussed above), so the former is out. I thought the latter was a bit too specific to cover in much detail, but I didn't want to altogether ignore it; thus the single brief sentence hinting that Harvard still pops its head up from time to time. -- uberpenguin 22:51, 30 December 2005 (UTC)
Integrated circuit CPUs between discrete transistor CPUs and microprocessors
There wasn't a transition directly from discrete transistor CPUs to microprocessors; CPUs were built from SSI, MSI, and LSI integrated circuits before the first microprocessors. I'd rename "discrete transistor CPUs" to "discrete transistor and integrated circuit CPUs", or something such as that, and mention that the discrete transistors were replaced by ICs; I'd argue that transition wasn't as significant as the transition to microprocessors, so I'm not sure I'd give it a section of its own. Guy Harris 09:23, 25 December 2005 (UTC)
- True true... I got kinda lazy and omitted this for brevity. I'll try to work it in somehow. -- uberpenguin
- Okay I reworked that section, and I think it's a lot more accurate and understandable now. Let me know what you think. -- uberpenguin 23:48, 25 December 2005 (UTC)
You also skipped the bit-slice technology that the late-'70s and later minicomputers used. While the chip fabrication technology of the time could support only very simple complete CPUs, similar technology was used to create multi-chip CPUs of much higher power. Cf. the AMD 2900 series devices. --Shoka 17:53, 4 March 2006 (UTC)
- I'm pretty sure I mentioned multi-IC CPUs very briefly. I'm not sure that there is a whole lot to be said about them from an architectural standpoint, though. -- uberpenguin 16:04, 5 March 2006 (UTC)
new inline citation
Check this out; it might make our lives slightly easier, in particular with citing sources. --ZeWrestler Talk 17:28, 31 December 2005 (UTC)
64-bit arithmetic on 32-bit processors, etc.
"Arbitrary-precision arithmetic" generally refers to bignums, and the arbitrary-precision arithmetic page does so as well; that's not, for example, how 32-bit C "long"s were done on 16-bit processors, and it's not how 64-bit "long longs" are done on 32-bit processors, so that really doesn't fit in "arbitrary-precision arithmetic". For example, it only takes two instructions on PPC32 to add 2 64-bit values, each stored in two registers (addc followed by adde on PPC32), and it only takes two instructions to load the two halves into registers - the equivalent ops are equally cheap on x86 - and it's typically done in-line, so it's arguably misleading to refer only to bignums when discussing that case. Guy Harris 22:51, 12 January 2006 (UTC)
- True, I just didn't want either the footnote or the text to get too bulky on a side point... Several folks who prowl FAC are pretty ardently against great elaborations that could conceivably be moved to another article. I'll change the text a bit to mention hardware big int support. --uberpenguin 23:12, 12 January 2006 (UTC)
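- For readers curious what this looks like in practice, here is a minimal C sketch (the type and function names are invented for illustration) of the add-with-carry sequence a compiler emits for 64-bit addition on a 32-bit CPU, the job addc/adde do in hardware on PPC32:

    #include <stdint.h>

    /* a 64-bit value held as two 32-bit halves, as in a 32-bit register file */
    typedef struct { uint32_t lo, hi; } u64pair;

    static u64pair add64(u64pair a, u64pair b) {
        u64pair r;
        r.lo = a.lo + b.lo;              /* low-word add; like addc, which latches the carry */
        uint32_t carry = (r.lo < a.lo);  /* unsigned wraparound means a carry occurred */
        r.hi = a.hi + b.hi + carry;      /* high-word add with carry in; like adde */
        return r;
    }

A couple of adds and a compare, all in-line; none of the dynamic-allocation machinery of bignums is involved, which is the point above.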
Square wave pic
I find the picture labeled "Approximation of the square wave..." slightly misplaced; the subject of the article isn't at all related to Fourier series, and CPU clock signals are generated by other means. I suggest that it be removed or replaced, and it would be great if someone would bother to draw one that shows rising and falling edges (it would aid understanding of DDR memory, for instance).
Daniel Böling 14:07, 25 January 2006 (UTC)
- Agree totally. It was always a bit out of place; I was grasping at straws to find a picture for that section, had that extra one lying around and stuck it in "temporarily." What would really be neat/appropriate there is a picture of a logic analyzer hooked up to some small microprocessor like a Z80... If nobody else has something like this I'll see if I can set it up sometime soon. -- uberpenguin 22:52, 29 January 2006 (UTC)
- Ehh... Well the logic analyzer with the counter is an improvement for now. I'll see if I can find or make something better in the future. -- uberpenguin 02:48, 3 March 2006 (UTC)
fundamental concepts
lead sentence
shouldn't it be "the component in a digital computer that interprets instructions contained in software and processes data."? Doldrums 07:57, 4 March 2006 (UTC)
- Well, the current sentence isn't incorrect since software contains both instructions and data. And when other data is loaded from disk into RAM it can be seen as becoming part of the software. Redquark 11:00, 4 March 2006 (UTC)
- Thanks to the VN architecture, there's usually little differentiation between data storage and instruction storage areas (except in some limited cases like superscalar cache). Therefore you see whole classes of instructions (like the "immediate" instructions) that contain both an operational message as well as data, which are read simultaneously upon execution. I think the original phrasing is very much correct from a literal standpoint. However, conceptually and perhaps theoretically, it's better to think in the terms you mention. Plus the extra word really doesn't harm anything, so it might as well stay... -- uberpenguin @ 2006-05-19 14:51Z
Comments on changes
I have largely reverted the changes to bolding and image spacing. It baffles me why someone would feel that stacking images on top of each other (thereby removing their correct contextual position in the text) on the right side is more pleasing than having the thumbnails in the correct context and balanced on both sides. I've therefore reverted back to my original image layout; if you have problems with that please discuss them here before changing anything.
On the subject of bolding, which has been discussed before in this article's FA nomination: the reason certain terms are bolded is because they are key to understanding subsequent text and are often unique or have unique meaning to the computer architecture field. I concede that some terms were unnecessarily bolded, so I've cut down the bolding to what I believe to be fundamental terms in the article.
I also removed the out of place blurb about Mauchly and the Atanasoff-Berry Computer that was added by an anonymous user. The text had absolutely no relevance to this article and I'm surprised it wasn't removed by somebody else. -- uberpenguin 14:38, 4 March 2006 (UTC)
- I disagree heartily with the image reversions. Whether some people (like me) prefer to right-justify while others prefer to scatter is up to argument. However, no image should be 350px. The usual on articles now, particularly featured articles, is 250px. Páll (Die pienk olifant) 15:08, 4 March 2006 (UTC)
- The norm for pictures is |thumb|, which sets the image size as per user preferences (but defaults to the rather small 180px wide), and the norm for diagrams is whatever size they're readable at. However, as long as the page works at 800x600 (which this does currently), it's whatever is appropriate for the page. This page has a much higher text:image ratio than Zion National Park, so it makes sense to have them a bit larger. It wouldn't hurt to set most of them to user-defined size though; the assumption is always that people will click on the pic if they want to see the detail. --zippedmartin 17:06, 4 March 2006 (UTC)
- While I do respect your position, per zippedmartin's comments, could we possibly defer to my preference here? Where I'm not going against current WP recommended styling I'd just rather have it "my way" in an article I wrote. I know it's a selfish motive, but it's just more pleasing for me to see it this way and the issue is little more than editor preference. -- uberpenguin 04:43, 5 March 2006 (UTC)
heat sinks
The link to heat sink was removed from See also, with the comment "heat sinks have nothing to do with a discussion of CPUs directly." This statement is contrary to the following: "Heat sinks are widely used in electronics, and have become almost essential to modern central processing units. ... Due to recent technological developments and public interest, the market for commercial heat sink cooling for CPUs has reached an all time high" (from heat sink) Shawnc 21:43, 4 March 2006 (UTC)
- I stand by my statement. You can make the argument that a discussion of thermal dissipation is key to nearly any significant engineering (and especially electrical engineering) scheme, but as it stands a CPU is a functional device, not necessarily a physical implementation. Thermal issues could justifiably be discussed in an article about, say, integrated circuits, but not here in an article about a functional device with many forms of implementation. "See also" could easily get out of hand if we included every article that could be logically connected to CPUs and their various incarnations and implementations. As it is, it should keep to topics that very directly relate to CPUs in the architectural sense. -- uberpenguin 02:44, 5 March 2006 (UTC)
- Also, upon reading the heat sink article, I think its deplorable state makes it a bad example to use here. If a layman read that they might think that microelectronics are the singular application of thermal management devices. -- uberpenguin 04:47, 5 March 2006 (UTC)
- In other words, "already covered in CPU cooling." Alright. Shawnc 03:24, 6 March 2006 (UTC)
- That too! -- uberpenguin 05:26, 6 March 2006 (UTC)
- I did some substantial work on the thermal grease article the other week. I have a few ideas and will see what I can do to further these secondary articles along. Thermal dissipation is a way of life here in the Phoenix Valley! ;-) -- Charles Gaudette 19:35, 4 June 2006 (UTC)
digital is not base-2
digital does not mean base-2, that is what binary means. A bit is a binary digit, which is not a pleonasm. MarSch 13:54, 5 March 2006 (UTC)
- Yes, I'm fully aware of that. Unfortunately there is an anonymous editor that is hellbent on including information about the Atanasoff-Berry Computer in this and other computer related articles. He seems to believe that my removing the text as irrelevant is an attempt to obscure history, but ignores the fact that this article is about CPUs, not early non-stored program computers. I've removed the poorly written and off topic text that he added (again). If/when you see him re-add the text, feel free to remove it as nonsense. -- uberpenguin 15:53, 5 March 2006 (UTC)
- Just in case I didn't make it clear; the text that you took issue with was added by the anon editor, not myself. Where I mention bits in the "integer precision" section I was careful to indicate that they are related to binary CPUs only. -- uberpenguin 16:08, 5 March 2006 (UTC)
- Thanks for removing it. I wasn't looking for an addition of a whole section with a picture, so I didn't fix it myself. -MarSch 18:12, 5 March 2006 (UTC)
- As we all know ALL non-binary computers were failures, as we also know ALL computers nowadays use the binary system like the ABC did first!!! As many of you don't know, ENIAC/EDVAC were 10 bit systems. And an industrialized version (derivative) of the ABC, as the US court concluded. That Anonymous user, 5 March 2006. —Preceding unsigned comment added by 71.99.137.20 (talk • contribs)
- Actually, ENIAC used a word length of 10 decimal digits, not 10 bits (binary digits). Its total storage was twenty of them.[1] -- Gnetwerker 17:45, 6 March 2006 (UTC)
- Did you just say "decimal"??? 71.99.137.20 17:54, 6 March 2006 (UTC)
- Yes he certainly did. ENIAC was a digital, decimal computer. Digital does NOT imply binary as you claim; that's one of the reasons your added text keeps getting reverted. Digital simply means finite state (as opposed to infinite state - analog), while binary is a base-2 numeral system. Claiming they are one and the same is a fairly severe confusion of terms. -- uberpenguin 18:18, 6 March 2006 (UTC)
- Additionally, the claim that "ALL non binary computers were failures" is absolutely ridiculous. ENIAC provided reliable service for BRL for over ten years, being upgraded significantly a few times (including one upgrade that made it a stored program computer). IBM, Burroughs, and UNIVAC all built several commercially successful computers that used digital decimal arithmetic. UNIVAC I, which is nearly universally considered the first highly commercially successful computer, used BCD arithmetic. Unless you, counter to nearly all computer historians, consider UNIVAC I a failure, you probably won't wonder why UNIVAC II and UNIVAC III also supported BCD.
- I'm really failing to see the point you are trying to make here. Nobody is arguing that the ABC wasn't influential or wasn't the first digital electronic computer. However, it wasn't stored program and is thus irrelevant to this article. CPUs are Von Neumann machines, so the American history of CPUs really starts somewhere between ENIAC (which was converted to stored-program) and EDVAC (which was stored program by design). -- uberpenguin 18:31, 6 March 2006 (UTC)
- Another example of a successful non-binary computer was the IBM 1620, which was (in its time) a minicomputer specifically intended for scientific calculation. This did all its arithmetic in decimal using lookup tables for multiplication and addition. The default precision for REAL numbers was 8 significant figures, but this could be varied up to 30 figures. Murray Langton 22:02, 7 March 2006 (UTC)
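- To make "digital decimal arithmetic" concrete, here is a rough C sketch of serial decimal addition, one digit per storage cell, least significant digit first. (This is purely illustrative; per the above, the 1620 looked digit sums up in tables in memory rather than computing them like this.)

    #include <stdint.h>

    /* add two n-digit decimal numbers; each array element holds one digit, 0..9 */
    static void decimal_add(const uint8_t *a, const uint8_t *b, uint8_t *sum, int n) {
        int carry = 0;
        for (int i = 0; i < n; i++) {
            int d = a[i] + b[i] + carry;  /* a digit sum is at most 9 + 9 + 1 = 19 */
            sum[i] = (uint8_t)(d % 10);   /* keep the ones place */
            carry  = d / 10;              /* carry a 1 into the next digit */
        }
    }

A machine working this way is still fully digital (finite, discrete states per digit) even though no base-2 arithmetic appears anywhere, which is exactly the digital-versus-binary distinction being made above.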
stored-program computer - mumbo jumbo
After John Atanasoff being called by the Navy and making the NOL computer project (on the advice of John von Neumann), ENIAC was secretly being built while John Mauchly was somehow participating in both projects without the knowledge of John Atanasoff. And somehow the NOL project was shut down, again on the advice of John von Neumann. And finally John Mauchly admitted in court that he, for instance, had to take a "crash course in electronics" shortly after being introduced to ABC basics. After the trial John von Neumann used the "stored-program computer" nonsense like it's the very key to computers. The truth is it's not even close to any of the Atanasoff findings, which he made just in making one version of his computer. Which are those findings: the use of binary base, logical operators instead of counters by using vacuum tubes (transistor), refreshed memory using capacitors, separation of memory and computing functions, parallel processing, and system clock. 71.99.137.20 17:47, 6 March 2006 (UTC)
- And I put the question to you again: what the heck does this have to do with stored-program CPUs? All devices ever called CPUs were Von Neumann/Harvard machines. I'll let you deal with your weird idea that the stored program concept wasn't a huge milestone in computer development, but the fact is that CPUs are stored program machines. -- uberpenguin 18:39, 6 March 2006 (UTC)
- It's very possible that Mauchly got his ideas on how to do arithmetic from Atanasoff - and that seems to be pretty much what the judge said in the court case - it's not good that he stole ideas and infringed patents. However, no amount of court rulings change the fact that the ABC wasn't a stored program device and ENIAC (eventually) was. Von Neumann certainly did say that the stored program thing is the very key to computers - and he was absolutely 100% correct. If you have to step the machine through the algorithm you want it to perform by hand (as was the case with the ABC) then it's completely analogous to a non-programmable pocket calculator. If you had enough memory and enough time and the right peripherals, the ENIAC could have run Windows 2000, balanced your checkbook and played chess (and any of the other amazing things computers can do). The ABC could no more have done those things than could an abacus or a pile of rocks. That ability to run a program is what makes a computer different from a calculator. If User:71.99.137.20 doesn't/can't/won't understand that then he/she doesn't understand computers AT ALL. You can imagine a computer with just 'nand', 'shift-right', 'store', 'literal' and 'jump if not carry' as its underlying instruction set and 'numbers' that are only 1 bit wide! Amazingly, such a machine is Turing-complete and can therefore (according to the Church-Turing thesis) be used to implement any arithmetic or logic function in software. So here we see something that you HAVE to call a computer because it can (in principle) run Windows, balance your checkbook and play chess - whose hardware can't increment or decrement - let alone add or subtract - and which can only represent numbers as high as 1 and as low as 0! This whole programmability thing isn't just a part of what makes up a computer - it's the ENTIRE thing that is a computer. Computers with things like adders and multipliers and 32 bit floating point only need them to get higher performance. SteveBaker 05:54, 10 March 2006 (UTC)
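- To illustrate that last point, here is a small C sketch (the function names are invented) of how a machine whose only logical primitive is NAND can still compute addition: every other gate, and from them a full adder, can be built out of NAND alone.

    /* NAND is functionally complete: all other gates can be derived from it */
    static int nand(int a, int b) { return !(a && b); }
    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return not_(nand(a, b)); }
    static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
    static int xor_(int a, int b) {                /* four NANDs make an XOR */
        int t = nand(a, b);
        return nand(nand(a, t), nand(b, t));
    }

    /* a 1-bit full adder built from the gates above; chain n of these for an
       n-bit adder, or loop over one in software on a machine with no adder */
    static void full_add(int a, int b, int cin, int *s, int *cout) {
        *s    = xor_(xor_(a, b), cin);
        *cout = or_(and_(a, b), and_(cin, xor_(a, b)));
    }

Hardware adders and multipliers only make this faster; they don't make it possible.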
Proposed Additions
To make a good article better, maybe the following additions will be useful: Connection 12:11, 7 March 2006 (UTC)
- In a Conceptual Background section, a word about the Von Neumann architecture, and a link thereto. This Conceptual Background may also solve the Atanasoff-Berry Computer issue. :) Connection 12:11, 7 March 2006 (UTC)
- A State-of-the-Market section. It would discuss processor families: especially the 8088 and its developments, and design solutions (more circuits per space v. architecture changes). (Or a link to Notable CPU architectures, adding this section there.) Connection 12:11, 7 March 2006 (UTC)
- Erm... Well, the history section already does briefly explain the significance of the stored program computer and Von Neumann's ideas and designs. I'm not sure what else you are suggesting we add. I don't really think a state of the market section is necessary for several reasons:
- It's very difficult to create such a thing and make it sufficiently terse without someone screaming POV.
- The Notable CPU architectures page was largely created to avoid this problem, and is currently linked in See also.
- I strongly believe this article should stick to the history, evolution, and fundamental operation of CPUs as functional devices. It should avoid elaborating on specific architectures unless their mention is useful to illustrate a certain concept. Architecture history is sufficiently covered in the specific articles on those architectures.
- Anyway, let me know what you think, I'd like some clarification on your first point. -- uberpenguin 15:32, 7 March 2006 (UTC)
- CPU architecture is a unique development to which many people have contributed, expressly or otherwise! This aspect I want stressed as a design "lesson". In the History section, the focus has been on implementation. I want a stress on architecture, and how it came along; what each person contributed. Von Neumann is not mentioned at all. All this needs to be presented in context. Also, what contributions didn't "show" in the main development path; here come other non-Von Neumann architectures. All this can only be links to other wiki articles.
- On the other point, what prompted my suggestion is that I didn't see the 8088, 386, 486, etc., in Notable CPU architectures. However, you are right. They should be covered there. My other point should be placed there. ;) --Connection 09:32, 8 March 2006 (UTC)
- The 8088, 80386, 80486 aren't instruction set architectures, they're microprocessors that implement instruction set architectures. The instruction set architecture they implement could be called x86 or, in the case of the 80386 and later processors, IA-32; they are mentioned on the Notable CPU architectures page. Guy Harris 09:51, 8 March 2006 (UTC)
- Mea culpa. I didn't see the IA-32 or x86 links, as I was searching for 8088, 80386, 80486! ... Who said they are instruction set architectures? --Connection 11:28, 8 March 2006 (UTC)
- The line between CPU architecture and implementation did not really exist until the S/360 (the article notes this). Up until that point, most significant traits you could enumerate about CPU "architecture" were merely implementation details. Therefore, a discussion of early computers is necessarily mostly about implementation. The history section is fairly short because this article cannot cover a lot of the ground covered by history of computing hardware; we're only really concerned with the development of stored program computers (Von Neumann IS mentioned, re-read the history section... I was merely hesitant to overtly label him "the inventor of the stored program architecture" because that's not wholly true.).
- I'm still not entirely sure what you want to include, but you're welcome to go ahead and write it here or in the article so we can see what you had in mind. Just try not to cover a lot of ground already covered by articles like CPU design (implementation) and Instruction set (ISA). -- uberpenguin 14:21, 8 March 2006 (UTC)
- What I have in mind is a set minor touches to connect things together. I will add them directly in the future. Over and out. --Connection 21:22, 8 March 2006 (UTC)
"integer precision"?
Surely "integer range" is meant. Where would I find any variation in precision of integer units between processors? --ToobMug 08:51, 31 March 2006 (UTC)
- Heheheh! Yeah - you're 100% correct. A lot of people misuse the term. SteveBaker 13:09, 31 March 2006 (UTC)
- All better now! SteveBaker 13:15, 31 March 2006 (UTC)
- Precision is the correct term to use here. Dictionary definitions:
- 'the accuracy (as in binary or decimal places) with which a number can be represented usually expressed in terms of the number of computer words available for representation'.
- 'The number of decimal places to which a number is computed.'
- Precision isn't being used in the strictly scientific sense here, but it is common to see the term used in relation to digital microelectronics. -- uberpenguin @ 2006-03-31 13:54Z
- Hmmm - maybe there is a rift in hacker culture here. I've been in the business 30 years and I wouldn't think of talking about the precision of an integer. The number of 'decimal places' is the number of digits after the decimal point - and that's zero for an integer. The number of 'significant digits' however depends on the size or range of the storage allocated to the integer. If the usage elsewhere is different, I'm surprised - but I guess anything is possible. The trouble with using 'precision' when you mean 'range' is that 'precision' loses its meaning when applied to (for example) fixed point arithmetic. No matter what - I think the article should use terms that are (at worst) less ambiguous and (at best) not incorrect. 'Range' and 'Size' express the meaning perfectly well. Precision is certainly not acceptable to everyone. SteveBaker 17:17, 31 March 2006 (UTC)
- Personally I hate the imprecise (hehe) word 'size', but it's not a really big deal, so I'll leave the changes. -- uberpenguin @ 2006-04-01 00:44Z
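- For reference, the distinction in numbers: an n-bit two's-complement integer covers the range -2^(n-1) through 2^(n-1) - 1 (for n = 32, that is -2,147,483,648 through 2,147,483,647), while its precision in the strict sense, the spacing between adjacent representable values, is exactly 1 no matter what n is. That is the sense in which 'range' or 'size' varies between processors and 'precision' does not.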
CCIE
Who has done CCIE here?
--It fayyaz@hotmail.com 17:34, 12 April 2006 (UTC)
Merged
- Processor was merged into Central processing unit (this article). That history now exists at Talk:Central processing unit/Processor article history.--Commander Keane 07:36, 14 January 2006 (UTC)
List of CPU flags?
Is there a list somewhere (in Wikipedia or not) of common CPU flags (like SSE, APIC, MMX, 3dnow...) and what they mean? (Other than the one I started building at User:Dcljr/Sandbox#List of CPU flags, of course.) - dcljr (talk) 17:31, 23 May 2006 (UTC)
- Uhh... For what architecture? Try reading the programmer's manual for whatever ISA you're using (looks like late x86 to me). -- uberpenguin @ 2006-05-23 18:36Z
- That list is a mixed bag of terms, some of which refer to features (such as MMX, SSE, 3DNow!), some of which refer to components (such as APIC and MTRR), some of which refer to I/O buses (such as MCA and VME), some of which refer to instructions (such as SYSCALL) - there's really nothing they all have in common other than being computer hardware terms (and some of them might not even be computer hardware terms - "DE" goes to a disambiguation page, and the only computer term there is "Desktop Environment"). Guy Harris 21:26, 23 May 2006 (UTC)
- Yeah, it's a mixed bag exactly because I don't know what they all mean! <g> Anyway, most of these are from the "flags" entry of "cpuinfo" in Linux (see, for example, [2]). - dcljr (talk) 19:31, 24 May 2006 (UTC)
- The ones from "cpuinfo" are probably the flags you get from the x86 CPUID instruction, listing the capabilities of the processor. Lists of them can be found in the Intel and AMD documentation of the x86/IA-32 (including EM64T) and AMD64 instruction sets; see, for example, the description of the CPUID instruction in IA-32 Intel® Architecture Software Developer’s Manual Volume 2A: Instruction Set Reference, A-M. "MCA" and "VME", unfortunately, are ambiguous; they can refer to the MCA and VME buses, which are system I/O buses and are characteristics of the system as a whole rather than of the processor, or they could refer to "Machine Check Architecture" and "Virtual 8086 Mode Enhancements", which are x86 CPU features and have nothing to do with the MCA or VME buses - the x86 CPU features are the ones reported by "cpuinfo". Guy Harris 19:35, 24 May 2006 (UTC)
- Aha! I just found List of computing and IT abbreviations, which would seem to be the place for this info, but the few terms I've looked for so far haven't been in the list. - dcljr (talk) 06:18, 25 May 2006 (UTC)
- Actually, I think CPUID would be an even better place for this information (please add any details it leaves out). --70.189.77.59 17:54, 27 October 2006 (UTC)
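- For anyone who wants to poke at this directly, here is a minimal C sketch using GCC/Clang's <cpuid.h> helper to read a few of the leaf-1 EDX feature bits that /proc/cpuinfo reports as "flags" (bit positions as documented in the Intel manual cited above):

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang wrapper around the x86 CPUID instruction */

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;                           /* leaf 1 unsupported */
        printf("mmx : %u\n", (edx >> 23) & 1);  /* EDX bit 23: MMX */
        printf("sse : %u\n", (edx >> 25) & 1);  /* EDX bit 25: SSE */
        printf("sse2: %u\n", (edx >> 26) & 1);  /* EDX bit 26: SSE2 */
        return 0;
    }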
Good article...
Sorry for cluttering the talk page; I just want to say this is a really well-written and informative article. 128.208.41.109 05:25, 25 May 2006 (UTC)
- I'd like to second this comment, I can't fault it!! —Preceding unsigned comment added by 92.41.217.152 (talk) 17:47, 14 November 2008 (UTC)
Performance comparison of common processors
It would be useful to read a comparison of the performances of common processors (either in a table in the article, or an external link).
- I don't care for the idea. Performance comparisons are very involved and detailed discussions, are often quite subjective and subject to testing bias, and would bulk up the article significantly. Furthermore, a fair sample of performance comparisons from the entire history of CPUs could be very difficult to contain in a table. That being said, I think it might be an interesting and relevant factoid to include a short sample comparison of, say, the integer performance of a very early von Neumann computer to a modern microprocessor. If you want to collect some figures on modern microprocessors, I can look up the data for old computers. -- mattb @ 2006-09-14T04:41Z
- I think there is room for such a table *somewhere* in Wikipedia, but I agree central processing unit is not the place. Please stick that information in benchmark (computing) until we find a better place? --70.189.77.59 17:54, 27 October 2006 (UTC)
- Check out iCOMP and PR rating. BebopBob 01:56, 21 December 2006 (UTC)
manufacturers of central processing units?
Who are the main manufacturers of CPUs, and how is performance measured?
- 1. Too many to briefly enumerate. 2. In too many ways to briefly enumerate. Reading CPU design may provide a little enlightenment. -- mattb @ 2006-10-22T02:16Z
- I like the CPU design article. It mentions "As of 2004, only four companies are actively designing and fabricating state of the art general purpose computing CPU chips." So -- are 4 too many to enumerate? Or do we need to update that article to include those other companies you are thinking about?
- Those are excellent questions. Please help us improve Wikipedia to make the answers better and easier to find. Many companies that design CPUs are fabless semiconductor companies, which pass their designs over to other semiconductor companies that actually fabricate the CPU. CPU performance is measured using benchmarks -- but be aware that there is no one "performance" number; one CPU may perform one benchmark faster, but some other CPU may perform your actual application faster. Do we need to mention/link to these things in the article? --70.189.77.59 17:54, 27 October 2006 (UTC)
I think it would be nifty if this article mentioned the "top" 8 or so CPU design companies and fabrication companies, in terms of the number of CPUs shipped.
I see that
- ARM Limited says "In 2007, 2.9 billion chips based on an ARM design were manufactured."
- PIC microcontroller says "Microchip recently announced the shipment of its five billionth PIC processor."
- 68k says "CPU32 and ColdFire microcontrollers have been manufactured in the millions ..."
- Data General Nova says "eventually 50,000 units were sold."
- MIPS_architecture says "... By ... 1997 the 48-millionth MIPS-based CPU shipped, making it the first RISC CPU to outship the famous 68k family." —Preceding unsigned comment added by 68.0.124.33 (talk) 16:27, 15 March 2008 (UTC)
- Advanced Micro Devices ... how many?
- Intel Corporation ... how many?
- Microprocessor says "About 55% of all CPUs sold in the world are 8-bit microcontrollers. Over 2 billion 8-bit microcontrollers were sold in 1997."
Would such a table (with a few more numbers filled in) be appropriate for this CPU article? --68.0.124.33 (talk) 02:29, 14 February 2008 (UTC)
Different CPU models?
What is physically different between similar processors, e.g. one that is 2 GHz and one that is 3 GHz?
- There may be absolutely nothing physically different about them, or they may be totally different. You'd need more information than just clock speed to say. -- mattb @ 2006-10-19T19:03Z
- Say they were the exact same model, and only the clock speed is different? Would only the multiplier be changed in the hardware?
- Well, in some form or another the global clock signal will have its period decreased. Whether that means changing a clock multiplier or the target frequency of some oscillator is application-specific. -- mattb @ 2006-10-22T02:13Z
- Certainly one *can* run a 3 GHz processor at 2 GHz, so there is not necessarily a physical difference.
- However, most 2 GHz processors cannot run at 3 GHz -- there *is* a physical difference.
- So, from least amount of difference to most difference, we have:
- only the external clock speed is different -- no internal difference.
- only the internal multiplier is different (see the worked example after this list).
- They were manufactured the same way, with the same photomask, but chip-to-chip and wafer-to-wafer variations in blurriness and defects made some chips fail to run at the designed speed ("weak transistors"), though they passed the test at a slower speed.
- They were manufactured according to the same layout, but a photomask shrink produced smaller, faster transistors and shorter, faster wires.
- The layout was tweaked slightly ("transistor sizing", "strengthening transistors") to improve the critical path.
- the chip was completely re-designed (deeper pipelines, improved clocking tree, smaller cache, etc.) to shorten the critical path.
- Does that answer your question?
- (Should I move this to the CPU design article?) --70.189.77.59 17:54, 27 October 2006 (UTC)
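- A worked example of the multiplier case, using hypothetical round numbers: the core clock is the external bus clock times the multiplier, so with a 200 MHz bus a x10 part runs at 200 MHz x 10 = 2 GHz while a x15 part runs at 200 MHz x 15 = 3 GHz; the two chips can otherwise be physically identical, differing only in which multiplier is set at the factory.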
Link inappropriate?
The external link to cpu-collection.de, a documentary website about the history of microprocessors, has been removed for being "inappropriate". In an article about the central processing unit - isn't this link helpful and therefore appropriate? What do you think? —The preceding unsigned comment was added by Morkork (talk • contribs).
- It was removed partially because you were adding it to several articles without first discussing its inclusion. That is generally viewed as WP:SPAM, especially if you have something to do with the website you are adding. I'd say that link may be appropriate for microprocessor (ask on the talk page), but not this article. -- mattb @ 2006-11-08T01:25Z
Why???
Why does this get vandalized so much, anyway??? What makes this a popular target??? 170.215.83.212 07:44, 17 November 2006 (UTC)
- You got me... I guess it's just people trying to mess up a good thing. -- mattb @ 2006-11-17T16:07Z
All people who have vandalized this page should be IP blocked. Especially the monumental asshole with IP 82.42.146.28 who wanted to destroy the whole article! —Preceding unsigned comment added by 132.203.109.196 (talk) 23:58, 21 September 2007 (UTC)
Moving some content from CPU Design?
I created a new section called Markets in the CPU design article. I think that section probably belongs here. What do people think?
Even more important, the CPU design article has a list of micro-architectural concepts, similar but not exactly the same as the Design and Implementation section of this article. This article doesn't mention RISC or cache, for example. All of these ought to be collected in one place. I don't have an opinion on where this micro-architectural stuff goes - either here or CPU design. Opinions?
Previously, it was discussed that the CPU design article should concentrate on the actual implementation task of designing a CPU. I agree with that. Things like micro-architecture and markets straddle the line between architecture and implementation, so it's difficult to figure out where this content belongs. Dyl 17:03, 18 February 2007 (UTC)
- I think the content you added should rightly stay in the CPU design article. This is an overview article and it is far more appropriate for it to describe the operation of CPUs at a high level rather than muck around with a lot (and there is a LOT to be said) of detail. This article actually does mention caches, but it defers to the article on cache rather than taking a lot of space to explain it here. -- mattb @ 2007-02-19T14:44Z
- Ah, I see the overview aspect now. Previously, I did not see it as an overview, as it is poorly titled. The titles of the sections mention the names ILP and TLP but don't explicitly state what they are; instead they name specific sub-topics. I'm renaming the sections to show they are different types of parallelism. I believe that's the aspect the overview is discussing. --Dyl 08:40, 20 February 2007 (UTC)
- That's fine, but the new names you've given to the headings are a bit longer than preferable. Could we shorten them a bit? How about just "Instruction level parallelism", "Thread level parallelism", and "Data parallelism". -- mattb @ 2007-02-20T15:11Z
CPU Operation
I dunno what happened to the CPU Operation section, but it's filled with example pictures and crap text now. 24.7.201.100 19:56, 18 February 2007 (UTC)
- That was probably vandalism you saw that has since been removed. -- mattb @ 2007-02-19T14:45Z
Something which Wikipedians go crazy about
Pointless question: I'm thinking about buying a computer (in pieces) and assembling it. Now if something (ArmA) requires a 3.0 GHz processor, can I use a 2.4?
- RAM: 2 GB, 800 MHz
- HD 500
- GeForce 8800GTX (752) —The preceding unsigned comment was added by 84.250.110.93 (talk) 09:02, August 22, 2007 (UTC)
- This is not the place to ask such questions, but Wikipedia:Reference desk/Computing is! This is a lesser-known part of Wikipedia where you can ask questions like this. At the Wikipedia:Reference desk you can also ask questions about many other non-computer-related subjects, such as medicine and philosophy. Mahjongg 00:31, 25 October 2007 (UTC)
Missing Pictures all over the place!
The two pictures with Intel chips do not appear. Is it broken? Has it been vandalized? —Preceding unsigned comment added by 132.203.109.196 (talk) 20:09, 15 September 2007 (UTC)
Hello! There are a lot of missing pictures in this article as well as in many other ones related to computer hardware. That would be cool if the people in charge of Wikipedia could fix these things! —Preceding unsigned comment added by 132.203.109.205 (talk) 16:58, 16 September 2007 (UTC)
homebrew CPU article
A homebrew CPU is a central processing unit constructed using a number of simple integrated circuits, usually from the 7400 (TTL) series. There used to be an article about it at Homebrew CPU, as can be seen in the history of the page [3]. However, somebody removed all the article's content (which was quite interesting) and made a simple redirect to Central processing unit out of the page. I discovered this when I read about the "Magic-1" and decided to create the page "homebrew CPU" myself, only to find out it already existed as a redirect to "Central processing unit". However, the Central processing unit article itself talks about CPUs built from TTL only in a historical context, and fails to mention that even today some people are building their own central processing units from scratch. I really think that "Homebrew CPU" warrants an article all of itself, or failing that, at least a cursory mention in this article. Mahjongg 00:44, 25 October 2007 (UTC)
- I don't think the subject belongs in this article (which has plenty to cover as it is), but I might believe it warrants coverage, and that would probably be most appropriate standing alone. I suggest you just restore the pre-redirect version, and then expand and improve it. (Don't homebrewers use FPGAs or something like that now?) -R. S. Shaw 06:08, 25 October 2007 (UTC)
- I personally find homebrew CPUs fascinating. If it turns out that homebrew CPUs aren't "notable enough" for Wikipedia, perhaps Wikibooks:Microprocessor_Design/Wire_Wrap would be a better place to discuss them. It currently contains all the text that used to be in the Homebrew CPU article on Wikipedia. --68.0.124.33 (talk) 08:39, 21 January 2008 (UTC)
Request for partial protection
This article was vandalized on Monday, November 12th, 2007 at approx. 9:00 a.m. Someone changed various sections of the article from "Central Processing Unit" to "Central Cock Sucking Unit" and committed various other acts of vandalism. I have tried to fix the article as best I could, but perhaps the article should be reverted to a previous state.
Additionally, I move that this page be partially protected for a while in order to deter the vandal from changing the page again. In addition, I move that the IP of the vandal (or his/her username) be given a temporary ban.
The Ice Inside 14:10, 12 November 2007 (UTC)
In the world of computing, raw facts have to be accepted as input, processed, and used as information for decision making (output). Fully explain how this process takes place and explain how the central processing unit plays a vital role in this process. —Preceding unsigned comment added by 41.242.25.134 (talk) 17:19, 22 April 2008 (UTC)
CPU architecture
temporary moved to my User page, because I started this scientific dispute, but I think maybe I should dispute this when I have more knowledge. --Ramu50 (talk) 04:14, 16 June 2008 (UTC)
mechanical cog?
What the heck is a mechanical cog? —Preceding unsigned comment added by 97.117.107.218 (talk) 06:06, 6 February 2009 (UTC)
CPU=processor or core?
There seem to be two accepted senses of the term CPU: 1. a processor chip, often containing more than one core; 2. a processor core (used in that sense in Multi-core). Which one is correct? --87.162.46.235 (talk) 20:42, 22 March 2009 (UTC)
- "CPU" or Central Processing Unit usually refers to the actual wafer, or physical chip that you would set into the motherboard's socket, regardless of the number of cores contained within. Useight (talk) 21:36, 22 March 2009 (UTC)
FLOPS!
Talk about FLOPS, please. -X —Preceding unsigned comment added by 99.29.63.161 (talk) 04:21, 11 May 2009 (UTC)
Why "Central"?
I mention this as a suggestion to the article's regular editors. The term "central" isn't explained. It has its origin in a distinction between the "central" and the "peripheral" parts of a computer. For a source, see Computer structures (Daniel P. Siewiorek, C. Gordon Bell, Allen Newell) on Google Books. In 2009 we don't refer to a SATA controller as a "peripheral processing unit", but we still have "central processing unit" as a legacy term. patsw (talk) 20:32, 12 June 2009 (UTC)
there's no 'components' section
so I'm making one... --Xali (talk) 03:00, 20 October 2009 (UTC) nvm, no I'm not; there's the components section way at the bottom with links and stuff. It's not very evident that it's there, and I don't see any Microsequencer there, though I don't know if it should be there. --Xali (talk) 03:07, 20 October 2009 (UTC)