Talk:PCI Express/Archive 2015
This is an archive of past discussions about PCI Express. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Diagram required..?
I think many people will visit this page in an attempt to identify a connector - either male on card, or female on motherboard. Does this page therefore need a diagram like the one at http://www.orbitmicro.com/company/blog/605 outlining what the slots look like? — Preceding unsigned comment added by Mattwinner (talk • contribs) 11:39, 5 February 2014 (UTC)
- Hello there! Isn't that already provided by the first picture in the PCI Express § Form factors section? — Dsimic (talk | contribs) 13:43, 5 February 2014 (UTC)
- Actually, would it also be reasonable to have a diagram giving the physical dimensions (maximum/minimum in some cases) for these cards? I came here trying to find out what they are (I need to make an ATX or "full-size" bracket for a second-hand card that I bought, which only had a "low-profile" bracket) but was sad not to see such details. From observation, the plate seems to be the same size and shape as the one PCI cards use, but I do not know if that is really the case; curiously, the low-profile bracket has the slot on the opposite side of the piece of metal that is bent over at 90 degrees at the top, which is used with a bolt or clip to retain the card in place. SlySven (talk) 00:13, 13 March 2015 (UTC)
- Well, the minimum card lengths are mandated by the length of the motherboard slot (and the ×n of the card, of course), while the maximum lengths (and heights) are limited by what a computer case can accept (see this card, which is part of a PLX Technology PEX 8311 PCI Express to Local Bus RDK, for an unusual example). The mounting brackets are the same, as they go into the same openings on computer cases. Could you, please, explain further what you mean by the slot being on the opposite side? Perhaps a picture would help. — Dsimic (talk | contribs) 08:41, 17 March 2015 (UTC)
- The card you linked to is non-standard; it probably uses a proprietary form factor. Normal retail PCIe cards have the same form factor as PCI cards, which also means that the maximum length of full-length cards is the same as for PCI full-length cards (i.e. 312 mm); the maximum heights of full-height and low-profile cards are also the same as for PCI (and of course so are the brackets). --MrBurns (talk) 17:22, 18 March 2015 (UTC)
- You're totally right about the maximum lengths and heights for low-profile and full-height cards. By the way, the card I've linked above is pretty much like a regular PCI Express card with a "second floor" attached to it. — Dsimic (talk | contribs) 17:30, 18 March 2015 (UTC)
Transfers per Second
I corrected a mistake in the article in which the author had conflated the concepts of "transfers per second" with "gross bit rate." My change was reverted with the comment, "Sorry, not an improvement, those are speeds for a 16-lane PCI Express slot, and they're 16 times higher than per-lane speeds above)"
The author states that the "speed" for a 16-lane slot is 16 times higher than the "speed" for a one-lane slot. He is using the word "speed" to mean gross bit rate. However, the table is labeled "GT/s" (gigatransfers per second), not gross bit rate. Transfers per second are NOT equivalent to gross bit rate.
According to the Wikipedia article bit_rate:
- "Gross bit rate: In digital communication systems, the physical layer gross bitrate, raw bitrate, data signaling rate, gross data transfer rate or uncoded transmission rate (sometimes written as a variable Rb or fb]) is the total number of physically transferred bits per second over a communication link, including useful data as well as protocol overhead."
This is what the author was referring to when he used the generic term "speed". He is, of course, correct that a 16-channel bus will transmit at 16 times the raw bit rate of a one-channel bus, assuming each channel is using the same bit rate.
However, the table does not purport to show raw bit rate; it is labeled as GT/s. A transfer is not a bit, it is a word. According to the Wikipedia article Transfer_(computing):
- In computer technology, transfers per second and its more common derivatives gigatransfers per second (abbreviated GT/s) ... are informal language that refer to the number of operations transferring data that occur in each second...
- These terms alone DO NOT SPECIFY THE BIT RATE at which binary data is being transferred, because THEY DO NOT SPECIFY THE NUMBER OF BITS TRANSFERRED IN EACH TRANSFER OPERATION (known as the channel width or word length). In order to calculate the data transmission rate, one must multiply the transfer rate by the information channel width. For example, a data bus eight-bytes wide (64 bits) by definition transfers eight bytes in each transfer operation; at a transfer rate of 1 GT/s, the data rate would be 8 × 10^9 bytes/s, i.e. 8 GB/s, or approximately 7.45 GiB/s. The bit rate for this example is 64 Gbit/s (8 × 8 × 10^9 bit/s).
Using the correct definition of transfers per second, the 16-lane bus has the same transfers per second as the one-lane bus, BY DEFINITION. If the author wishes to make the point that a 16-lane bus carries more data, he should change the label from GT/s to bps. Alternatively, if he wishes to show the transfer rate in GT/s, then it MUST be the same for the one-lane and 16-lane cases. — Preceding unsigned comment added by Tpkaplan (talk • contribs) 03:50, 11 December 2014 (UTC)
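As an aside, the arithmetic in the quoted example is easy to check mechanically. Here is a minimal Python sketch of the "data rate = transfer rate × channel width" relation; the function name and values are purely illustrative, not taken from any PCIe tooling.

```python
# Data rate = transfer rate x channel width, per the Transfer (computing)
# quote above.  The values reproduce the quoted example: a 64-bit-wide bus
# running at 1 GT/s.

def data_rate_bits_per_s(transfers_per_s: float, channel_width_bits: int) -> float:
    """Gross data rate in bit/s for a channel of the given width."""
    return transfers_per_s * channel_width_bits

rate = data_rate_bits_per_s(1e9, 64)  # 64 Gbit/s
print(rate / 8 / 1e9)    # 8.0   -> 8 GB/s
print(rate / 8 / 2**30)  # ~7.45 -> ~7.45 GiB/s
```

For a serial link such as a single PCIe lane, the channel width is one bit, which is why its GT/s figure and its gross Gbit/s figure coincide.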
- Fully agreed with the comments above. The article as it stands is seriously flawed. The changes proposed by Tpkaplan should be allowed. — Preceding unsigned comment added by 62.238.71.173 (talk) 12:11, 24 December 2014 (UTC)
- Hello! Sorry for my delayed response; by the way, I'm the editor mentioned above. :) For the last day or two I've been going through the PCI Express specifications to gather references that would clarify the whole thing. It takes time, though. — Dsimic (talk | contribs) 15:34, 24 December 2014 (UTC)
- Baud and T/s are pretty much the same in that they're used for symbols per second (baud) and state changes per second (T/s). Commonly, baud is used when non-binary symbols are transferred (e.g. telephone modems, Ethernet) and T/s for binary symbols (e.g. short-range interconnects), especially when serial links are aggregated as in PCIe. -- Zac67 (talk) 16:36, 24 December 2014 (UTC)
- Right, but we should also have 40 GT/s, 80 GT/s, 128 GT/s and 256 GT/s as the symbol rates for ×16 v1.x, v2.x, v3.0 and v4.0 slots, respectively. Why? Because multi-lane slots still transfer data separately over aggregated lanes, distributing the data when it enters a multi-lane channel and aggregating it back at the destination (a rough analogy might be RAID 0 in data storage). Thus, we still have separate data transfers over each lane, resulting in summed symbol rates across the individual ×1 lanes. — Dsimic (talk | contribs) 20:41, 24 December 2014 (UTC)
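- To make the RAID 0 analogy above concrete, here's a rough Python sketch of round-robin byte striping across lanes and reassembly at the far end. It's only an illustration of the distribution idea; the real protocol's framing, encoding and ordering rules are deliberately ignored.

```python
# Round-robin striping of a payload across N lanes, and reassembly.
# Purely illustrative; real PCIe striping operates on framed, encoded
# symbols, not raw payload bytes like this.

def stripe(data: bytes, lanes: int) -> list[bytes]:
    """Distribute bytes across lanes, one byte per lane in turn."""
    return [data[i::lanes] for i in range(lanes)]

def reassemble(striped: list[bytes]) -> bytes:
    """Interleave the per-lane streams back into the original order."""
    out = bytearray()
    for group in zip(*striped):  # assumes equal-length stripes
        out.extend(group)
    return bytes(out)

payload = b"ABCDEFGHIJKLMNOP"
assert reassemble(stripe(payload, 4)) == payload
```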
- A transfer as in T/s is about the state change and stepping rate of the medium, not about throughput. As stated in the corresponding article, the channel width has no impact on the T/s. Please refer to the PCIe 2.0 base spec, where "2.5 GT/s" is used for the previous PCIe 1.1 speed and "5.0 GT/s" for the then newly introduced speed, without any regard to lane width. In short, if you want to show throughput you stick to bit/s; if you want to show the line rate you use T/s. -- Zac67 (talk) 13:40, 25 December 2014 (UTC)
- Hm, sorry but I still disagree; here's an excerpt from the Transfer (computing) article:
- In order to calculate the data transmission rate, one must multiply the transfer rate by the information channel width.
- [... and a bit later...]
- Expanding the width of a channel, for example that between a CPU and a northbridge, increases data throughput without requiring an increase in the channel's operating frequency (measured in transfers per second).
- Of course, that's all fine, but the channel width remains the same in PCI Express, as multiple lanes are only aggregated, not somehow merged together into wider PCI Express links. Moreover, the channel width can't go up in PCI Express because of its serial nature, so the lanes need to continue operating independently and pumping data on their own. As we know, v1.x's 2.5 GT/s, for example, actually means 2,500,000,000 raw bits per second, not "raw" bytes. That simply reflects the serial nature of PCI Express and, as one symbol equals one bit, supports the reasoning that channel width doesn't add up in multi-lane links.
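- For concreteness, that per-lane arithmetic can be written out as a small Python sketch, using the line-code overheads from the published specifications (8b/10b for v1.x and v2.x, 128b/130b for v3.0):

```python
# Per-lane PCIe throughput: symbol rate times line-code efficiency.
# One symbol equals one bit on a serial lane, hence the divide by 8
# to get bytes per second.

VERSIONS = {
    "1.x": (2.5e9, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    "2.x": (5.0e9, 8 / 10),     # 5.0 GT/s, 8b/10b encoding
    "3.0": (8.0e9, 128 / 130),  # 8.0 GT/s, 128b/130b encoding
}

for version, (symbol_rate, efficiency) in VERSIONS.items():
    per_lane = symbol_rate * efficiency / 8  # bytes per second
    print(f"v{version}: {per_lane / 1e6:.1f} MB/s per lane, "
          f"{16 * per_lane / 1e9:.2f} GB/s for a x16 link")
```

This reproduces the familiar 250 MB/s, 500 MB/s and roughly 984.6 MB/s per-lane figures, and 4 GB/s, 8 GB/s and about 15.75 GB/s for ×16 links.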
- While flipping through the PCI Express Base Specification Revision 2.1, I've noticed this on page 206, in section 4.2.4.8. Link Data Rate Negotiation:
- All devices are required to start Link initialization using a 2.5 GT/s data rate on each Lane.
- That also applies the data rate in GT/s to each lane, for devices using multi-lane links. At the same time, that might imply a possibility for the lanes within a link to end up operating at different data rates, at least in theory. Then again, the whole thing could be interpreted as having different symbol rates (as in GT/s) for different numbers of lanes in a link. Thoughts? — Dsimic (talk | contribs) 16:03, 25 December 2014 (UTC)
- After finally obtaining a copy of the PCIe 2.1 specs (which sadly aren't free), I'd like to add a few quotes:
- "Link: The collection of two Ports and their interconnecting Lanes." (Terms and Acronyms, p. 30) – a link consists of one or more lanes, working in parallel (a main difference to a parallel connection being that each serial lane carries its own clock and data is reassembled on a slightly higher level)
- Table 2.38 on p. 141 shows "Transmission Latency Guidelines for 2.5 GT/s Mode Operation" for Link operating widths ×1, ×2, ×4, ×8, ×12, ×16 and ×32 – obviously, you don't aggregate transfers per second.
- I guess the main reason for not using baud (rate) is that each lane transmits one symbol per transfer, so the baud rate would aggregate, or at least be ambiguous – the transfer rate isn't. --Zac67 (talk) 07:34, 17 March 2015 (UTC)
- Thank you for more quotes from the PCI Express specification! Well, that makes sense, so for PCI Express links the baud rate would sum up (symbols per second), while the GT/s rate (transfers per second) would remain the same. The only remaining "gray area" :) is that each lane—as a serial bus—carries its own clock, so there are still rather independent data transfers over each lane in a multi-lane link – if you agree. Also, "2.5 GT/s mode operation" could be technically (or linguistically) different than the overall operation mode for the link as a whole. Just a few more thoughts about the whole thing. — Dsimic (talk | contribs) 14:54, 18 March 2015 (UTC)
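- Summing up the distinction in code form: for an ×N link the aggregate symbol rate (baud) scales with the lane count, while the GT/s figure names the per-lane stepping rate and is unchanged by link width. A toy Python sketch, purely illustrative:

```python
# Per-lane transfer rate vs. aggregate symbol rate for an xN link.
# The GT/s figure is a property of each lane; the baud sums across lanes.

def link_rates(per_lane_gt: float, lanes: int) -> dict:
    return {
        "transfer_rate_GT_per_s": per_lane_gt,                 # width-independent
        "aggregate_symbols_per_s": per_lane_gt * 1e9 * lanes,  # sums across lanes
    }

print(link_rates(2.5, 1))   # x1:  2.5 GT/s,  2.5e9 symbols/s
print(link_rates(2.5, 16))  # x16: 2.5 GT/s, 40.0e9 symbols/s
```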
- Sorry, I "corrected" this without realizing there was an ongoing debate about the matter. As a reader, I find the current presentation unintuitive; I want to know the maximum throughput I can get with X number of lanes. I don't particularly care whether you call them bits or transfers. Re-stating the T/s metrics for single-lane configurations is not only redundant, but confusing at first. Yes, it makes sense, but I don't want to have to think that hard when I'm trying to figure out how many drives it will take to saturate an 8-lane connection. If multiplying the T/s metric is inaccurate, wouldn't it make more sense to state the throughput in bps? Readers with enough technical knowledge to understand the difference will know that the numbers from the single-lane table are equivalent. Readers without that knowledge will be seeing units they already understand (and are probably more interested in). Plus, we fit more information into a smaller space. Using bps seems like a good compromise: it's technically accurate and not going to confuse those of us who don't work with units like GT/s everyday. —Zenexer [talk] 21:33, 19 March 2015 (UTC)
- Hello! Hm, but in the PCI Express § History and revisions section we already have per-lane and 16-lane bandwidths expressed in MB/s or GB/s, and if I'm not mistaken that's exactly the information you're looking for? The article's infobox also includes the same information, in a compacted form. — Dsimic (talk | contribs) 04:22, 20 March 2015 (UTC)
Backward vs. forward compatibility
Take a look at Backward compatibility vs Forward compatibility. You have two situations:
- old card in new motherboard
- new card in old motherboard
Clearly, one of these situations is backward and the other is forward compatibility, because the card and motherboard are complementary goods, just like a system and its input. I would be willing to call the first situation “backward” because the motherboard in many ways is the system, unless you can convince me otherwise. � (talk) 06:21, 29 April 2015 (UTC)
- Hello! What I've referred to is looking at the whole thing from the perspective of different PCI Express standard versions instead of looking at what plugs into what. That should be more understandable, at least that's how I see it. — Dsimic (talk | contribs) 08:38, 29 April 2015 (UTC)
- I don't think you've understood the concepts of backward and forward compatibility. A V2 card is backward compatible with a V1 slot when it works with the V1 feature set. On the other hand, a V1 card plugging into a V2 slot requires backward compatibility of the slot. Forward compatibility is relatively seldom seen: e.g. a PCI 33 MHz card being completely functional in a 66 MHz slot/bus, but slowing its throughput to what it would be on a 33 MHz bus (skipping every other transfer or similar), would be forward compatible (i.e. working with a future version of the native feature set without improving its own features).
- In the case of PCIe, all devices negotiate the highest protocol supported on both sides (just like with Ethernet on twisted pair) – this is backward compatibility at all times, just on different sides. --Zac67 (talk) 11:15, 29 April 2015 (UTC)
- That's a very good point, Zac67. Upon connection, all PCI Express devices negotiate the protocol version, link speed and number of lanes; thus, it's always about backward compatibility. For example, when a 3.0 card is plugged into a 2.0 slot, it's that the card is backward compatible with 2.0, while the slot does pretty much nothing special during the negotiation phase. — Dsimic (talk | contribs) 11:58, 29 April 2015 (UTC)
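- For readers following along, the negotiation described above can be pictured as taking the minimum of what both ends support. The following is only a toy Python model with hypothetical names; real link training is performed in hardware by the link training state machine, and, as quoted from the specification earlier, every device starts the process at 2.5 GT/s.

```python
# Toy model of PCIe link negotiation: the link settles on the highest
# data rate and the widest lane count supported by BOTH ends.
# Hypothetical names; not an actual PCIe software API.

SUPPORTED_RATES_GT = {"1.x": 2.5, "2.0": 5.0, "3.0": 8.0}

def negotiate(card_version: str, slot_version: str,
              card_lanes: int, slot_lanes: int) -> tuple[float, int]:
    rate = min(SUPPORTED_RATES_GT[card_version],
               SUPPORTED_RATES_GT[slot_version])
    lanes = min(card_lanes, slot_lanes)
    return rate, lanes

# A 3.0 x16 card in a 2.0 x16 slot runs at 5.0 GT/s over 16 lanes.
print(negotiate("3.0", "2.0", 16, 16))  # (5.0, 16)
```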
- Backward compatible seems to be the appropriate term. It is the relationship between the two components and not an individual component. Neither the previous card nor the previous mobo is expected to process the next generation's commands or speeds by ignoring them. Glrx (talk) 19:06, 1 May 2015 (UTC)
Don't believe you – it's becoming a copyleft of SCSI
I see the advantage that not all devices need all pins, since they're cheap and slow: but that is what USB or serial is for. PCI now has several protocols and several connectors: I wonder if my old PCI cards would still work.
But more so, despite an attempt at radically different terminology, this looks like it is simply becoming a stolen SCSI II: "supporting multiple devices to a CPU at differing speeds, full duplex, at high speeds". Moreover, the new SATA/IDE drives increasingly have the same features that SCSI hard disks have long had: again, they use different terminology, but in the end they are just copying SCSI designs while avoiding the patents. Old SCSI devices still work on newer SCSI; I've done it, I guarantee it. — Preceding unsigned comment added by 72.209.223.190 (talk) 22:35, 16 July 2015 (UTC)
- Sorry, I don't understand what your point is. In general, talk pages should be used to discuss possible article improvements; please see WP:NOTFORUM for further information. — Dsimic (talk | contribs) 21:11, 18 July 2015 (UTC)
I am unable to find any information in the article on what replaces PCI Express
According to the article on the new architecture of Nvidia cards, it will no longer use PCI Express, though it does not say anything about what it will use instead. Could someone find out and update both articles? 84.213.45.196 (talk) 00:33, 21 July 2015 (UTC)
- Those are still rumors, and as such don't belong here. — Dsimic (talk | contribs) 10:10, 18 August 2015 (UTC)
- NVLink is supposed to replace PCIe for (Pascal) GPU interconnects only; system interconnects will continue to use PCIe. --Zac67 (talk) 17:14, 18 August 2015 (UTC)
- Hm, but NVLink is supposed to replace PCI Express as the CPU–GPU interconnect, please see this source, for example. — Dsimic (talk | contribs) 22:53, 18 August 2015 (UTC)
- Hmm, that source's second headline is "GPU interconnect for exascale computing", and Nvidia only hints at it becoming more – in concept, it seems very similar to HyperTransport and QPI; possibly we'll see more in a few years. --Zac67 (talk) 07:09, 19 August 2015 (UTC)
- Right, that's why I called them rumors. :) — Dsimic (talk | contribs) 07:12, 19 August 2015 (UTC)
Table about PCI express connector pinout
The second set of columns in the table is not joined to the bottom of the first set of columns. Why? Is its purpose to reduce the length of the article?
Luigi.a.cruz (talk) 15:12, 29 October 2015 (UTC)
- Yes, the table is simply folded. --Zac67 (talk) 18:34, 29 October 2015 (UTC)