Talk:TCP offload engine
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects: Computing, Computer networking, and Computer hardware.
The contents of the Large send offload page were merged into TCP offload engine on 26 September 2021. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
The contents of the Large receive offload page were merged into TCP offload engine on 26 September 2021. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
The POV problem with this article
It seems that there are many valid objections cited for TOEs in the external links. Could someone with more time integrate them into the article? It reads somewhat like an advertisement for this technology in light of these objections. Jesse Viviano 15:43, 15 September 2007 (UTC)
- Agreed. -- intgr [talk] 16:19, 15 September 2007 (UTC)
I disagree; this does not sound like an advertisement at all. It is simple, factual, and to the point. Perhaps more information related to the TOE technology could be added; that would be a nice addition. However, if you research TCP/IP and have experience with statistics of using that protocol, you will see that the processor usage claim is valid. The relationship between TCP/IP and the PCI architecture could also use some expansion. Of course there are also valid objections to using TOE. Perhaps someone should add both sides of the argument to the article and let readers decide for themselves. —Preceding unsigned comment added by 69.207.178.209 (talk) 06:31, 24 October 2007 (UTC)
If you read the external link regarding why Linux does not support TOE (http://www.linux-foundation.org/en/Net:TOE), you will find that the speed gain claims are not legitimate. Also, keeping the code closed-source is compromising the potential for the Linux community to provide input into making the technology more effective and practical. ~Brent
In case you didn't notice, that blurb on the linux-foundation web site does not cite any sources whatsoever, and is not backed by any research or measurements. Performance advantages are, however, measurable and very real. Trasz (talk) 16:08, 8 January 2009 (UTC)
- "the processor usage claim is valid."
- Actually it's not as valid as you would think it is. This article was written back when 1GbE was a new thing. Since then, operating systems (well, Linux at least) and nearly all Ethernet chipsets have come to support lots of less intrusive offloading techniques: TCP segmentation offload, TX/RX checksum offload, polling RX (NAPI), and large receive offload. With all these, there is a lot you can do with a 2.4GHz CPU. -- intgr [talk] 09:54, 17 November 2007 (UTC)
Since then, 10GbE has become common, and TOE has become important again. Trasz (talk) 16:08, 8 January 2009 (UTC)
Oh gee, Linux guys complaining and whining about close-source technology. What a surprise. There's no POV problem here - the article is brief, informative, and to-the-point. I'm deleting the POV marker. Scortiaus (talk) 20:12, 20 November 2007 (UTC)
- How about stopping the personal attacks and actually contributing to the discussion? You didn't even try to address any of the claims brought up on this page so you don't really have a case. -- intgr [talk] 21:09, 20 November 2007 (UTC)
I've integrated criticism of TOE into the article, and thus removed the POV tag. Brianski (talk) 05:09, 24 November 2007 (UTC)
I'm going to add a 'criticism' portion instead of 'lack of support in linux' portion. Under the criticism portion there will be a talk of lack of support in linux. -poningru —Preceding unsigned comment added by 159.178.41.60 (talk) 15:00, 9 March 2009 (UTC)
Can not be correct
- "Originally TCP was designed for unreliable low speed networks (such as early dial-up modems) but with the growth of the Internet in terms of internet backbone transmission speeds (Optical Carrier, gigabit Ethernet and 10 Gigabit Ethernet links) and faster and more reliable access mechanisms (such as Digital Subscriber Line and cable modems) it is frequently used in datacenters and desktop PC environments at speeds over 1 gigabit per second. The TCP software implementations on host systems require extensive computing power. Full duplex gigabit TCP communication using software processing alone is enough to consume more than 80% of a 2.4 GHz Pentium 4 processor (see Freed Up CPU Cycles), resulting in little or no processing resources left for the applications to run on the system."
I initially came across this article and was amazed by this fact. I decided to do a casual test. I moved a 200 GB mysql backup between two Linux servers at work and, despite having them connected at 1 Gbit/s full duplex, I have not been able to push the server past 10% CPU usage. Something felt odd, so I googled a bit and came to find TOE is controversial. I was therefore not surprised when I noticed the same argument on this discussion page.
That being said, I really think we should pull the above claim. It just makes people distrust the rest of the article. And no, it has nothing to do with me being a Linux user; it's because that statement fails a very simple repeatable test. (Wk muriithi, 8 October 2011)
- You have a point but the 2.4 GHz Pentium 4 is nearly ten years old so it's not a fair comparison. ⫷ SkiSkywalker ⫸ (talk) 15:16, 12 October 2011 (UTC)
- I think the article also fails to mention checksum offloading which has been around for a long time and has already reduced the tcp load significantly. plaisthos (talk) —Preceding undated comment added 13:42, 31 December 2011 (UTC).
- I do not agree with that kind of test for this matter. When disk is involved and the amount of data is so big, there are disk queues, file system speed in flushing to media, etc. I think it is better to generate a point-to-point stream from host1's /dev/random to host2's /dev/null over a TCP channel (i.e. with nc), so no disk is involved, just the CPU cycles to generate the random data and the network itself. — Preceding unsigned comment added by 195.76.89.2 (talk)
- Certainly don't use /dev/random, it's extremely slow. But anyway, like I stated in another topic, TCP offload engines were relevant in 2004 when this article was first created: networking stacks were not well optimized, 1Gbit Ethernet was state of the art, processors were an order of magnitude slower in many respects and only had a single core; checksum, segmentation and large receive offload were not common in network hardware. And even then the benefits of TOE were suspicious enough that it wasn't considered useful enough to implement in Linux.
- On modern server hardware, saturating 1 Gbps Ethernet using encrypted HTTPS traffic, served from the disk, via an un-tuned HTTP server hardly uses 10% of your CPU time. Yes, I tested it. -- intgr [talk] 17:33, 24 November 2014 (UTC)
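The disk-free measurement idea discussed above can be sketched in a few lines of Python (a hypothetical illustration, not from the article or the discussion): it streams zeros over a local TCP socket, so no disk or random-number generation is involved. Note that loopback traffic never reaches a real NIC, so this only demonstrates the methodology; the transfer size is an arbitrary choice.

```python
import socket
import threading
import time

def run_sink(srv):
    # Receiver: accept one connection and discard every byte (no disk involved).
    conn, _ = srv.accept()
    while conn.recv(1 << 16):
        pass
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=run_sink, args=(srv,))
t.start()

payload = b"\x00" * (1 << 20)  # 1 MiB of zeros, like reading /dev/zero
n_chunks = 256                 # 256 MiB total; increase for a longer run
start = time.time()
cli = socket.create_connection(("127.0.0.1", port))
for _ in range(n_chunks):
    cli.sendall(payload)
cli.close()
t.join()
srv.close()
elapsed = time.time() - start
print(f"sent {n_chunks} MiB in {elapsed:.2f} s ({n_chunks / elapsed:.0f} MiB/s)")
```

Watching CPU usage (e.g. with top or mpstat) while such a stream runs gives the per-gigabit cost the commenters are debating, without the disk-queue effects mentioned above.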
Advertising in the Article
The following paragraph that used to be in the History section reads like an advertisement with its dramatic claims ("unprecedented", "game-changing"), doesn't give any technology details, and has no references. I've thus deleted it. Cjs (talk) 14:20, 4 July 2011 (UTC)
More recently, intilop corporation of Santa Clara, CA has developed the next generation of most advanced TOE Architecture employing patented search engine technology that delivers unprecedented line rate TCP performance with lowest latency. A series of TOE engines that implement full offload which are also customizable by utilizing FPGA technology have been a real boon in all applications requiring low latency and ultra-high performance. It is said to be a game changing technology in Network Acceleration.
Linux information incorrect
The article says that Linux does not support TCP offloading at all, but my Debian 6.0.2 x86_64 machine running kernel 2.6.32 with a Broadcom NetXtreme BCM5755 supports it and has it enabled by default. 38.99.3.113 (talk) 02:18, 5 December 2011 (UTC)
- No, this is not TCP offload engine (TOE). It is partial TCP offload either on the receive reassembly side (gro, lro) or on the send segmentation side (tso, gso). Or it may be checksum offload (rx, tx). But it is definitely not TCP offload engine! See the ethtool output if you have anything with the name "toe". I will guarantee that you will not! 88.112.67.108 (talk) 13:35, 24 July 2017 (UTC)
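The distinction drawn above can be made concrete with a short Python sketch. The sample text below is written in the style of `ethtool -k <iface>` output but is illustrative only, not captured from any particular NIC; it shows partial offloads (segmentation, reassembly, checksums) being present while no full-TOE feature exists to be listed.

```python
# Illustrative sample in the style of `ethtool -k <iface>` output.
# These are the partial offloads discussed above; note there is no "toe" line,
# because mainline Linux drivers expose no full TCP offload engine feature.
SAMPLE = """\
rx-checksumming: on
tx-checksumming: on
tcp-segmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
"""

# Parse "feature: state" lines into a dict.
features = dict(line.split(": ", 1) for line in SAMPLE.splitlines())

partial_offloads = sorted(k for k in features
                          if "offload" in k or "checksumming" in k)
full_toe = sorted(k for k in features if k == "toe")

print("partial offloads:", partial_offloads)
print("full TOE features:", full_toe or "none")
```

Every feature in the sample still leaves the operating system's TCP stack in charge of the connection state, which is exactly the point the comment above makes.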
Explain purpose of "TOE key" ?
Some server systems, such as those made by Dell, have an odd little doodad that plugs into the motherboard, known as a "TCP Offload Engine key". There is no explanation of what this key is, or what its purpose is.
I assume it is some sort of authorization device, where you could buy a server without TOE enabled at a lower price, and then "add capability!" by purchasing the authorization later.
If I ever discard a server that uses a TOE key, I'm probably going to cut the key open, photograph the results, and add it as a section of this article.... if no one else does first.
-- DMahalko (talk) 01:48, 30 January 2013 (UTC)
- I don't think it's relevant to this article, but might be relevant at software protection dongle. The TOE key is nothing different from how earlier servers needed an on-board dongle to enable hardware virtualization support, or the BMC and KVM redirection over network. -- intgr [talk] 10:00, 30 January 2013 (UTC)
"Reduction of PCI traffic" section - but PCI is dead
PCI is dead; long live PCI-E. Is this section mostly obsolete, or do the same problems apply to PCI-E due to its PCI heritage? 08af9a09 (talk) 12:24, 29 March 2013 (UTC)
- Well, I get the impression that TCP offload cards are just as obsolete as PCI. :) -- intgr [talk] 09:18, 1 April 2013 (UTC)
Outdated
I'm placing an "outdated" tag on the whole article because the majority of it is based on 2000-ish sources, figures and technology; the 1Hz/1 bps rule is certainly outdated because core counts and instructions per cycle rates have kept increasing despite clock speeds levelling off. Meanwhile NICs and kernels have invented other, less invasive offloading techniques (multiqueue, LRO, LSO, flow steering, etc), not taken into account in the article.
Today, while there are still a handful of TOE devices on the market, they are a niche product and their usefulness is much less clear. To the contrary, there are plenty of stories about optimizing network performance using non-TOE setups, such as from Red Hat [2], Google [3], CloudFlare [4] and researchers http://www.globalcis.org/jcit/ppl/JCIT1787PPL.pdf.* The article should be rewritten from that point of view. -- intgr [talk] 14:49, 24 August 2015 (UTC)
- * Commented out predatory open access journal. Guy (Help!) 12:04, 11 December 2017 (UTC)
Removed incorrect mention of TOE support in Linux
I removed an incorrect statement about TOE support in Linux: However kernel network drivers have had TOE support since 2002.[1]
The statement is incorrect because said drivers support partial TCP offload features, not full TCP offload engine. There is a difference between partial TCP offload and TCP offload engine. A partial TCP offload supports only segmentation on the send side or reassembly on the receive side and the full operating system TCP stack is still used. A TCP offload engine (TOE) replaces parts of the operating system TCP stack with silicon. Linux will never, ever support TOE: https://wiki.linuxfoundation.org/networking/toe 88.112.67.108 (talk) 13:33, 24 July 2017 (UTC)
Large receive offload and Large send offload merge proposal
These are functions of the TOE and can be handled as subsections in TCP_offload_engine#Types_of_TCP/IP_offload. ~Kvng (talk) 15:51, 21 September 2020 (UTC)
- Merger complete. Klbrain (talk) 09:27, 26 September 2021 (UTC)