Talk:RSX Reality Synthesizer
RSX specs
This is the true spec below, so if anyone wants to talk, let's talk about the spec:
- 550 MHz G70 based GPU on 90 nm process [1]
- 300+ million transistors ( 600 million with Cell CPU ) [2]
- Multi-way programmable parallel floating-point shader pipelines
- Independent pixel/vertex shader architecture
- 24 parallel pixel pipelines
- 5 shader ALU operations per pipeline per cycle ( 2 vector4 and 2 scalar (dual/co-issue) and fog ALU )
- 27 FLOPS per pipeline per cycle
- 8 parallel vertex pipelines
- 2 shader ALU operations per pipeline per cycle ( 1 vector4 and 1 scalar, dual issued )
- 10 FLOPS per pipeline per cycle
- Maximum vertex count: 1.1 billion vertices per second ( 8 vertex pipelines x 550 MHz / 4 )
- Maximum shader operations: 136 shader operations per second
- Announced: 1.8 TFLOPS (trillion floating point operations per second) ( 2 TFLOPS overall performance )[3]
- 24 texture filtering units (TF) and 8 texture addressing units (TA)
- 8 Render Output units
- Maximum pixel fillrate: 4.4 gigapixel per second ( 8 ROPs x 550 MHz )
- Maximum Z sample rate: 8.8 gigasamples per second ( 2 Z samples x 8 ROPs x 550 MHz )
- Maximum anti-aliasing sample rate: 8.8 gigasamples per second ( 2 AA samples x 8 ROPs x 550 MHz )
- 100 billion shader operations per second [4]
- Maximum Dot product operations: 51 billion per second [5]
- 128-bit pixel precision offers rendering of scenes with high dynamic range rendering ( HDR)
- 256 MiB GDDR3 RAM at 700 MHz [6]
- 128-bit memory bus width
- 22.4 GiB/s read and write bandwidth
- Cell FlexIO bus interface
- Support for OpenGL ES 2.0
- Support for DirectX 10
- Support for S3TC texture compression [1]
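Most of the throughput figures in the list above are simple unit-count times clock arithmetic. As a sanity check (not an official derivation; the clocks and unit counts are simply the ones posted in the list), they can be reproduced in a few lines of Python:

```python
# Back-of-envelope check of the listed theoretical figures,
# assuming the 550 MHz core / 700 MHz memory clocks as posted.
CORE_HZ = 550e6
ROPS = 8

# Pixel fillrate: 8 ROPs x 550 MHz = 4.4 gigapixels/s
pixel_fill = ROPS * CORE_HZ

# Z / AA sample rate: 2 samples per ROP per clock
z_rate = 2 * ROPS * CORE_HZ

# Vertex setup: 8 vertex pipes, 4 clocks per basic transform
vertex_rate = 8 * CORE_HZ / 4

# Memory bandwidth: 128-bit bus, GDDR3 at 700 MHz (DDR, so 1400 MT/s)
bandwidth_bytes = (128 / 8) * 700e6 * 2

print(pixel_fill, z_rate, vertex_rate, bandwidth_bytes)
```

Note that 2 samples x 8 ROPs x 550 MHz works out to 8.8 gigasamples, not an even 8; an even 8.0 corresponds to a 500 MHz core clock, which is what the downgrade discussion further down this page is about.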
--
Those are not the true specs of the RSX. The true specs are here -> http://world.altavista.com/babelfish/trurl_pagecontent?lp=ja_en&url=http%3A%2F%2Fwww.watch.impress.co.jp%2Fgame%2Fdocs%2F20060925%2F3d_tgs.htm
Please read the article. I know it may be hard because it is translated from Japanese, but the mistakes that you have pasted from a trade show article that is over two years old are glaringly obvious. The RSX was DOWNGRADED. It is only 500MHz, and several other things changed as well. Aleksael 20:52, 1 August 2007 (UTC)
- No, the specs you posted are not the actual specs. The site that I linked to verifies it. As I already stated, here on this discussion page as well as at the main article, those specs are from the SONY PRESS RELEASE which was given at the TOKYO GAME SHOW in SEPTEMBER 2006. If you are keeping up with the timeline here, the specs that you and others constantly upgrade to were from a CES press release made WAY BACK IN EARLY 2005. You are also dead wrong about the RSX comparison to the 8800. The 8800 destroys the RSX completely and utterly; you don't even need a GTX, a 320MB GTS will still stomp the living daylights out of it in any benchmark that you can provide. Sony OVERPROMISED on the RSX and couldn't deliver. Nvidia has never stated anywhere that the RSX is more powerful than the 8800 either; they only ever made comparisons to it outperforming two 6800 Ultras in SLI. A 7800GTX is even faster than dual 6800's in SLI OR the RSX, because the RSX has half the bit depth to the pipelines at only 128. What you have here is a custom GPU that was cheapened from a 7800GS, and the 7800GS is already the low-end chip. Put it this way: the 7800 die, when it was new, cost as much as half of the PS3's ultimate $800+ cost estimate. With Blu-ray also expensive, and the Cell itself gobbling up the rest of the development costs, do you really honestly believe that the RSX could have been that high up on the food chain? This is a commodity chip we are dealing with here. Aleksael 02:47, 2 August 2007 (UTC)
You should sign posts, so people don't get confused.
You're taking their flops rating at face value, which counts every single flop they could find on the chip, and comparing it to the "programmable flops" in pixel and vertex shaders of other pc gpus. 1.8 tflops is a misleading, pumped-up number. This slide: http://www.watch.impress.co.jp/game/docs/20060329/3dps309.htm (Sony's own, from gdc) shows what I mean. As we all know (assume), RSX has 24 shader "pipes", two alus each, and each alu is capable of 4-flop madds (vector4, vector3+scalar, vector2+vector2, etc.).
And since madd = multiply + add (a 2-flop op), each alu is capable of 8 flops per clock. So 24 pixel shader "pipes" with 2 alus each would yield 24 x 2 = 48 alus, and 48 x (4 x 2) = 384 flops per clock. (192 gflops per second, at 500mhz)
(For that slide) they don't add in the flops contributed by the free fp16 normalize operation, nor add the fog alu, nor the special function alus. (Some of which may have been there for fixed function pipelines to begin with) Or any other automated function that just happens to involve flops (there are lots).
Nor, (suspiciously) do they add the flops from vertex shaders to that number. (would be 80 flops per clock from those, if there were 8)
Which are listed on a separate slide. (I assume) (There may only be 4 vertex shader alus btw, but no (official) linkable proof...yet)
(464 flops = 232 gflops total programmable from both pixel and vertex shaders, assuming 500mhz, and 8 vertex shaders) Swapnil 404 08:50, 5 August 2007 (UTC)
Oh, and even if you give it credit for the fp16 flops, and the special function alus, and add the flops the supposed 8 vertex shaders, you're still only looking at 728 programmable flops per clock. Or, 364 gflops, at 500 mhz. And that's being generous. Swapnil 404 08:50, 5 August 2007 (UTC)
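For anyone checking the arithmetic in the two posts above, here is the flops counting spelled out. The ALU counts and the 500 MHz clock are the posts' stated assumptions, not confirmed specs, and the 728 "generous" figure is taken as given rather than re-derived:

```python
# Programmable-flops counting per the reasoning above.
# Assumptions (not official): 24 pixel pipes x 2 ALUs, 8 vertex ALUs,
# every ALU doing a 4-wide MADD (multiply + add = 2 flops per lane),
# plus a 1-flop-wide MADD mini-ALU per vertex shader.
CLOCK_HZ = 500e6

pixel_flops_per_clock = 24 * 2 * (4 * 2)              # 384
vertex_flops_per_clock = 8 * (4 * 2) + 8 * (1 * 2)    # 80

total_per_clock = pixel_flops_per_clock + vertex_flops_per_clock  # 464
gflops = total_per_clock * CLOCK_HZ / 1e9             # 232.0

# The "generous" tally above adds fp16-normalize and special-function
# ALUs to reach 728 flops per clock:
generous_gflops = 728 * CLOCK_HZ / 1e9                # 364.0
```

Either way, the result sits far below the 1.8 TFLOPS marketing figure, which is the point being made above.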
Also, it doesn't support DX10. Swapnil 404 (talk) 08:34, 11 September 2009 (UTC)
Just thought I'd add, there's no such thing as a 90nm G70, it should be changed to G71. The G70 is 110nm, and the G71 is the same chip but at the 90nm manufacturing process. —Preceding unsigned comment added by 81.86.112.14 (talk) 17:23, 27 September 2007 (UTC)
This is kind of a random question (not really), but it has to do with the announced release of Tekken 6: Bloodline Rebellion on the PS3 and how people were saying it wouldn't be able to run as well as the arcade version. Does this have anything to do with the PS3's GPU? ('cause that's pretty much the worst part of it from what I can see) Or is it something else? And if it is, then that just sucks and I think Sony should upgrade the GPU (probably won't happen, but one can dream, right?). —Preceding unsigned comment added by 71.103.44.180 (talk) 18:50, 13 October 2008 (UTC)
References
- ^ Gantayat, Anoop (2006-01-30). "New PS3 tools". IGN.com. Retrieved 2006-08-28.
shader operations?
90.8 billion shader operations per second [(24 pixel shader ALUs x 5.6667 shader operations per cycle x 550 MHz) + (8 vertex ALUs x 4 shader operations per cycle x 500 MHz)] — Preceding unsigned comment added by 67.64.116.8 (talk) 19:42, 13 June 2011 (UTC)
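That 90.8 billion figure is just the bracketed arithmetic evaluated directly, with MHz carrying its usual factor of 10^6 (note the formula mixes a 550 MHz pixel clock with a 500 MHz vertex clock, which is itself a questionable assumption):

```python
# Reproducing the mixed-clock shader-op figure from the post above.
pixel_ops = 24 * 5.6667 * 550e6    # ~74.8 billion per second
vertex_ops = 8 * 4 * 500e6         # 16 billion per second
total = pixel_ops + vertex_ops     # ~90.8 billion per second
print(round(total / 1e9, 1))
```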
Downgrade
Is it real or not? 550 MHz to 500 MHz.
--
I restored the previous revision that had the RSX at 550mhz, as that is the last official word anyone has gotten out of Sony. If anyone has an official statement or concrete evidence (i.e., not random forum posters) to the contrary, that should be provided in this article, I think. Jonabbey 07:51, 5 January 2007 (UTC)
--
Could you please cite this last official word out of Sony? Where is it? I can't find it anywhere. On the other hand I have provided a reference to the Tokyo Game Show press release information in the Japanese link which clearly cites 500MHz. If you have anything newer please share, otherwise I think this is pretty official. Aleksael 17:24, 22 March 2007 (UTC)
--
The PS3 review at CNET stated: "Paired with PlayStation 3's RSX Reality Synthesizer graphics-processing unit, a gargantuan 550MHz, 300-million-transistor graphics chip..." [7]--KingEmperor24 05:33, 18 April 2007 (UTC)
--
The trouble with CNET's review is that it reflects exactly Sony's earlier specifications, while Sony's announcement at TGS in September '06 differs and is lower. In other words, the specs were downgraded from the earlier, more optimistic announcement, but that announcement was more publicized, and this is why CNET is going from it. Sony would not have been able to change the specs again only two months before the launch. Aleksael 19:30, 13 May 2007 (UTC)
--
This thread ([8]) from ps2dev has some proof of the 500MHz downclock. I would say it is credible because it was not posted in a discussion about RSX specs, so the chances of someone editing the output just for proving his point are slim. To add to that, the same output is reported by a second user in the next post. —Preceding unsigned comment added by 217.120.230.112 (talk) 12:58, 13 April 2008 (UTC)
--
If my memory serves me well, at the very beginning the RSX specs listed 32 pixel pipelines (or shader units?), later downgraded to 24, right? —Preceding unsigned comment added by 201.81.199.213 (talk) 11:20, 28 April 2008 (UTC)
Any extra info?
Does anyone out there have any further details for this chip? I was thinking specifically of:
Pictures of the package item/picture of actual 'silicon'
Details of the Instruction Set Architecture
Information concerning any additional features in comparison with a standard Nvidia chip
Details of specific methods used for hardware acceleration e.g. exactly what operations are parallelised
Range and accuracy of point (vertex) storage format
Details of hardware support for vector operations (rotation, 3d to 2d transform, ray surface intersections etc)
Hardware support for rasterisation (ie 3d to 2d transform) vs. support for ray casting/tracing
Degree of autonomy of the processor (i.e. is it autonomous or is it 'fed' by another processor, e.g. Cell, as was the case in the PS2, where the Graphics Synthesizer was 'fed' by the Emotion Engine)
Is the architecture 'Turing complete'?
Is the chip also responsible for outputting the digital HDMI / analogue (component/VGA?) signal itself, or does another chip do this?
If you can supply references (web links) to any of the above info (or related), you can assume that I volunteer to incorporate a summary into the RSX article. (Just leave a link on the talk page.) Or add the information yourself (even better!) Thank you. HappyVR 05:53, 25 February 2006 (UTC)
- I don't think much else regarding that information has been released! The only thing that I have to add to the article is the rumor that the RSX is not actually based on NV47 architecture but rather is more geared towards the G80 (or rather a modified 7900GT setup that contains elements of G80). The rumor started because of how Sony has been completely mum on the RSX and we know nothing else about it except what is outlined in this article (which is why I think you won't get the information you're looking for)!
- There will either be an announcement regarding the status of the RSX (whether it is indeed much more powerful than Sony has let on) or the PS3 will be released and we'll find out what's actually in there after that. StealthHit06 22:44, 4 August 2006 (UTC)
- I found an image of the RSX (website source on the image page), yet I do not know what license to give it. So unless one is given or figured out, it'll get deleted automatically. XenoL-Type 15:21, 25 January 2007 (UTC)
As StealthHit06 said, there's no chance somebody will leak this because they're likely under NDA. We can only speculate the instruction set and features to be similar to G80. Rasterization is obviously accelerated. From what I remember of the definition of "Turing complete", I would say everything that is barely programmable is "Turing complete". You will never find this proven because no one in the business really cares about this. BTW, rasterization isn't a 3d->2d transform. 85.18.201.168 09:26, 12 February 2007 (UTC)
Speculating the instruction set and features of the RSX to be similar to the G80 would be like speculating the instruction set and features of the Radeon 9800 to be similar to the X1950. Nvidia has given press events with Sony likening the chip to the G70. Aleksael 17:35, 22 March 2007 (UTC)
--
Concrete information has been given on this, but it is being repeatedly deleted from the references section, along with the very pertinent information that it provides, like the 90 nm process for instance. You would think a war is going on here with the way this article is being edited. Aleksael 17:27, 22 March 2007 (UTC)
Vertices per second
Where are these measurements coming from? StealthHit06 06:57, 24 December 2006 (UTC)
The number is totally useless. This depends on triangle layout, post-T&L-transform cache optimization and vertex shader. It says absolutely nothing at all. 85.18.201.168 08:57, 9 February 2007 (UTC)
The number is NOT totally useless. And you didn't answer his question, which is where the figures came from. Here is where the number is coming from. First, please look here: Nvidia states nothing more on their own company website than that the RSX has 300 million transistors. For the rest of it you have to look at the presentation that took place on May 16, 2006, information which they no longer have on their site and link to Gamespot for, here. There you find it stated that the "RSX is more powerful than two GeForce 6800 Ultra video cards," presumably when used in SLI mode. Now if you look up the specs on the 6800 Ultra in SLI you will find a pixel fill rate of 12.8 gigapixels, a texel fill rate of 12.8 gigatexels, and a geometry rate of 1.2 billion vertices per second. Here is Nvidia's current 6800 Ultra information page; for SLI mode I'd have to look elsewhere, but this jibes with a single card, and the other factors they give there which differ are due to the 6800 being upgraded over time. The specs still match what they were when the RSX was created and the press events happened. There are several other verifiable statements from Nvidia that the RSX was to be more powerful than two 6800 Ultra cards in SLI mode. You would have to ask them why they aren't giving this information on their own page now, but this does not make it any less legitimate information, and the way they presented it to us with this sort of comparison is not out of line. Aleksael 18:25, 22 March 2007 (UTC)
It had been stated at one time that the vertex set-up engine of RSX was limited to 250 million vertices per second (one every two clock cycles at a 500mhz clock frequency).
That's "approximately" the maximum number the gpu could ever actually draw. (This may be listed as the limitation of other Nvidia gpus "closely related" to RSX, and if so, it could probably be verified through the documentation of those gpus, considering official RSX specs are harder to come by (Sony NDA).)
Still, not sure if citing documentation of a G7x, would be enough of a reference. (at least in official capacity)
I do recall a site "leaking" this as part of list of things that were supposedly "broken" about PS3. (most of it misinterpreted, but did come from actual documents)
Of course, "set-up" occurs after occlusion and backface culling, etc., so more geometry could still be "processed". You could say it can "calculate" more, just not ever display them. But really, the average number of vertex shader instructions would be greater than just matrix * vector (a basic position transformation takes 4 clock cycles for 1 vertex shader alu to complete, hence the 1.1 billion figure), and culling and the like should eliminate a large percentage of them every frame.
So there's really no point in having hardware capable of setting up the maximum number of vertices of that type, in anything but a benchmark test.
-Set-up rate is the figure given in the white papers related to Xbox 360's Xenos, as "500 million" (250 million when the tessellation unit is used, i.e. the vertices would have to be processed in two passes, but with fully adaptive level of detail adjustments).
Which is a more reasonable figure than just listing theoretical maximums based on processing nothing but basic polygon transforms that it could probably not actually sustain.
I guess it could be said that RSX has a theoretical maximum of something like ~40 billion programmable shader flops devoted to vertex processing: 8 vertex alus, each capable of a 4-wide floating point madd operation, plus 1 mini alu capable of a 1-flop madd operation (for special function, or scalar, or whatever; at least I think the mini-alu is madd capable).
(madd = multiply + add = 2 flops)
8 x (5 x 2) = 80 flops per clock; 80 x 0.500 GHz = 40 gigaflops per second.
But finding an official document besides leaked RSX pdf docs or stuff from the leaked sdk by hackers and the like, is unlikely any time soon. So not much "official" sources to go by, for expanding this article. Swapnil 404 16:14, 22 July 2007 (UTC)
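A short sketch tying together the numbers from the posts above. Every input here (the 500 MHz clock, 8 vertex ALUs with madd-capable mini-ALUs, one set-up per two clocks, four clocks per basic transform) is the posts' hearsay or assumption, not official Sony data:

```python
# Vertex-throughput figures as reasoned out above (all assumed specs).
CLOCK_HZ = 500e6

# Triangle set-up engine: one vertex every two clocks
setup_rate = CLOCK_HZ / 2            # 250 million vertices/s

# Transform-limited rate: a basic position transform occupies
# one vertex ALU for 4 clocks, across 8 assumed ALUs
transform_rate = 8 * CLOCK_HZ / 4    # 1.0 billion/s at 500 MHz
# (the oft-quoted 1.1 billion figure uses the original 550 MHz clock)

# Vertex shader flops: 8 x (4-wide MADD + 1-flop MADD mini-ALU)
vertex_gflops = 8 * (5 * 2) * CLOCK_HZ / 1e9   # 40.0
```

The gap between the 250 million set-up limit and the 1.0 billion transform-limited figure is exactly the point made above: the transform rate assumes nothing but trivial shaders, which no real game runs.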
Really, I've never actually heard Sony themselves officially give figures on how many vertex shaders there are in RSX, as none of their official press slides give a specific number, just a number for textures, etc. (which has always struck me as odd), leaving some sites to ad-lib figures from elsewhere, or make educated guesses on their own, based on G70 specifications. (The 136 shader ops number seems to jibe with it, but those are also old. But I guess it's never been stated otherwise, neither officially nor unofficially. Plus, I noticed the TGS links someone mentioned.)
There are presumed to be 8, as there are in G70/G71. But even then, this is a customized console gpu. So, while in the pc world a gpu with a defective vertex shader alu could be packaged as a different SKU (like 7800gt, as opposed to gtx), thus decreasing wasted production, a console gpu wouldn't seem to have many secondary uses, and the game isn't expected to be made to scale to different specs. So leaving one for redundancy to improve yield % (as they decided to do with Cell) probably isn't out of the question. Especially given the fact that vertex shading isn't usually the bottleneck, and the fact that there are lots of things spe's are expected to do for reducing the load on vertex processors (like the pre-culling on Cell, sometimes required for decent performance as it is). (Lots of other things besides that, though.)
Anyway, most of the stuff I wrote here is a pretty pointless collection of hearsay, unless there's an official source found, but I have nothing better to do at the moment, even though I have no intention of editing anything. Swapnil 404 16:14, 22 July 2007 (UTC)
But I can point out that where it says "Maximum shader operations: 136 shader operations per second", it is actually meant to be 136 "per clock". Really just a typo I'm sure, as 136 is mentioned in relation to G70 later in the article.
(24 pixel shader "pipelines", each with 2 shader alus, each capable of a vector and a scalar operation, with 1 being connected to a floating point texture processor that computes an fp16 normalize operation) + (8 vertex shader alus, each capable of a vector4 and a scalar)
24 x 2 x 2 = 96; 96 + 24 = 120; 120 + (8 x 2) = 136 ops per cycle. Or something like that. Swapnil 404 16:14, 22 July 2007 (UTC)
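The same 136-per-clock tally, written out step by step (the unit counts are assumed from G70, as noted above, since Sony never officially confirmed them):

```python
# Counting shader ops per clock, per the breakdown above.
pixel_alu_ops = 24 * 2 * 2   # 24 pipes x 2 ALUs x (vector + scalar) = 96
fp16_normalize = 24          # one free fp16 normalize per pipe      -> 120
vertex_ops = 8 * 2           # 8 vertex ALUs x (vector4 + scalar)    -> 136

total_ops_per_clock = pixel_alu_ops + fp16_normalize + vertex_ops
print(total_ops_per_clock)
```

At any plausible clock this is on the order of 10^10 to 10^11 ops per second, which is why "136 per second" can only be a typo for "per clock".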
Floating point performance
Also, the theoretical floating point performance of 1.8 teraflops seems insanely high to me. As far as I know, this number comes from approx. 2-year-old (by early 2007) PR material claiming that the whole game console has floating point performance of 2 teraflops. In my opinion, these numbers are totally incorrect. Note that the GeForce 8800 GTX with the G80 GPU is believed to have floating point performance of 500 gigaflops; the previous generation 7900GTX part, on which RSX is based, has 250 gigaflops.
On this ground (well, it's more like basic logic), I removed the 1.8 teraflops floating point performance line.
8.3.2007: I see the number is coming back. Unless evidence is provided, it shouldn't be brought back. Removed.
15.3.2007: Do I have to phrase it again and again? This number just can't realistically be true. (Anyway: "Encyclopedic content must be attributable to a reliable source.")
Agreed, removed this figure since I couldn't verify it--it has also "changed" at some point, from 1.8 teraflops to 1 teraflop, either way I could not find any thing confirming this at all. Aleksael 05:24, 27 March 2007 (UTC)
It is true; even IGN has an article about it: the RSX has 1.8 TFLOPS and the PS3 has over 2 TFLOPS of overall performance.
CUBE152: That's the actual performance of the RSX (1.8 TFLOPS), but the PS3 has an overall performance of 2.18 TFLOPS. When I edited Wikipedia I put a link beside it; check it out here [9]. The reason Sony made the PS3's CPU and GPU so powerful is that the PS3 will be on the market for 5-7 years and they have to make it strong enough to beat other competitors. So the RSX is stronger than the NVIDIA 8800 Ultra, and even stronger than the NVIDIA GeForce 9, which is over 2x faster than the 8800 Ultra, is coming out later in 2007, and will have an overall performance close to 1 TFLOPS. Compared to the PS3's overall performance, it still can't beat the PS3 with 2.18 TFLOPS. Sony didn't modify an NVIDIA GPU; it only made it based on its architecture, which is only the shape of the chip, not the performance of it. Sony added their own power to their own product. --Cube152 16:28, 22 July 2007 (UTC)
You just can't believe marketing stuff. Those 1.8 TFLOPS figures were once told to tech journalists, and they just keep repeating them over and over. You can't really call that evidence. The most you could responsibly say on those grounds is "Sony claims 1.8 TFLOPS". Realistically, the RSX just can't have real-world performance surpassing, for example, the GeForce 8800. Also, FLOPS figures MIGHT or MIGHT NOT correspond with real performance. It's possible that the GPU, with some "smart" method of measurement, could be proclaimed 1.8 TFLOPS capable, but then the number would just lose all meaning, as it would put RSX above more powerful chips. You know, FLOPS figures can be representative of real performance only if it is possible to guess the performance difference between various chips from them. If we know the chip is weaker but has an insanely high rating, it is just clear that the figure is confusing, and thus wrong. —Preceding unsigned comment added by 195.113.65.9 (talk) 14:37, 24 October 2007 (UTC)
600 mil second?
Where did you get your info that the RSX can produce over 600 million polygons a second? Not even the PS3 fanboys at Gamespot make this claim. The 360 can produce almost twice as many polygons a second, something even PS3 fanboys admit. —The preceding unsigned comment was added by 71.134.225.134 (talk) 03:26, 10 February 2007 (UTC).
---
That's understandable, this article has been repeatedly edited, removing information and inserting spurious or unverifiable information, and ultimately removing real references from Tokyo Game Show's 2006 press release. It would seem there are a lot of Sony fans who do not want anything to cast this page in a negative light. Aleksael 17:29, 22 March 2007 (UTC)
We gotta get this right
Where did you get that the Xbox 360 can make twice as many polygons as the PS3?
Just look at Killzone 2; it has 400,000 polygons running.
Spellings
"Vetrex"? I'm all for comparison against retro consoles, but this is silly. It's spelt "Vertex". There are probably other spelling mistakes in there too, but since this is likely to be overhauled, I didn't check. I would have edited this myself, but it's currently protected. Hinges 15:39, 22 April 2007 (UTC)
Article nonsense
Isn't it true that all the specs in this article are based on hearsay? The problem is, this article presents it as fact. For example, DirectX 9c support! (Since when has the PS3 used the DirectX API?) —Preceding unsigned comment added by Mgillespie (talk • contribs) 14:29, 11 May 2007 (UTC)
- No, the specs in this article are not based on hearsay. Hit the link to the Tokyo Game Show from September 2006. Only two months before the console's release, the specifications were done and cast in stone; few reviews have reflected this, however, and have instead gone by earlier press releases that were overly optimistic, so the real GPU speed is 500MHz, not 550 (i.e. those original specs were downgraded). Additionally, the RSX is G70 based. That chip supports DirectX 9. It does not matter whether Sony chooses to develop their software with that API or not. Aleksael 04:03, 12 May 2007 (UTC)
- Saying it supports DirectX 9 is misleading, a better wording would be that it "could support at least the DirectX9", but even that would still be misleading, as supporting DirectX 9 is just as much an issue of drivers as it is of hardware (cf. DirectX 9 intel integrated chips). DirectX9 is an API, not a feature set (contrary to the urban myth). 83.159.9.78 19:50, 3 June 2007 (UTC)
- No, saying it supports DirectX 9 is not misleading. You are very correct, DirectX 9 is an API, and that is precisely why the RSX/G70 supports it. It isn't an urban myth to mention it here when the die that the RSX was built from was designed to work with its features. Sony choosing not to use the API in their development environment is irrelevant to the functionality of the chip. If it were so wrong to mention the Direct3D feature set which the chip can run, it would be just as wrong to mention its OpenGL ability. The only thing that you can really question here is relevancy, as the development environment Sony works from uses OpenGL and not DirectX. That does not have the same kind of bearing on the engineering of the chip, however, which is perfectly capable of both APIs. Aleksael 00:48, 7 June 2007 (UTC)
NVIDIA templates
Might want to chuck this on here for continuity: {{NVIDIA}}
- Ok, I've added it. -- Bovineone 23:29, 1 August 2007 (UTC)
Vertex
This page currently includes a link (within the text "Maximum vertex count: 1 billion vertices per second (8 vertex x 500 MHz / 4)") to vertex, a disambiguation page. Can someone with the ability to do so please change it to vertex (geometry)? Thanks. —David Eppstein 21:22, 19 May 2007 (UTC)
- Done. Cheers. --MZMcBride 21:41, 19 May 2007 (UTC)
- Thanks! —David Eppstein 02:27, 20 May 2007 (UTC)
Soundstorm
Shouldn't it be mentioned that the RSX also handles sound related tasks for the PS3?
- I googled for this and found little concrete information. What was there was pretty far back and I'm not even sure how official it was. (i.e. Will RSX be the Soundstorm 2?) Would be an interesting addition if you have anything specific though? Aleksael 03:11, 22 June 2007 (UTC)
Tokyo Game Show Article
Okay, could I hear from anyone interested in this article, why does this reference keep getting deleted?
http://www.watch.impress.co.jp/game/docs/20060925/3d_tgs.htm
English translation at altavista here.
Near as I can tell it's because it is the only article that shows lower specifications for the RSX--but it is also the newest, most up to date, and last official report from Sony on the specifications for this chip. Seriously people, I know that every other article in existence says the clock rate is 550MHz. I know that every other article out there says the RAM speed is 700MHz. I know that the anandtech article you keep putting in the place of this newer one direct from the Sony presentation at the Tokyo Game Show has higher specs. But if you actually take the time to READ this article, through the Translator if you don't speak Japanese, you can clearly see that the article states the higher specs were DOWNGRADED before the release. The Tokyo Game Show was in September 2006. The PS3 debuted less than two months later. Do you really think that Sony would tell everyone the video chip was downgraded with these specs, then reverse their whole operation and pump them back up again? The old information that keeps getting pumped on this page is INCORRECT. Aleksael 13:35, 26 July 2007 (UTC)
- I agree. Looks pretty official. I have seen it mentioned a few times, that it was reduced, but they may have been citing that article too. I would have to look. But considering NDA's being the way they are, I guess wikipedia articles will be limited to marketing PR sometimes, when it comes to things like this. Unless, some site tests the connections for clock frequency themselves. (not unheard of) I wonder how wikipedia would have handled this, if Sony hadn't bothered to give any numbers, outside of the 1.8 teraflops, and 2x 6800 comments. You could find developers who have said what they are working with (in passing), and other such sources, but that wouldn't be good enough for most it seems.
- Note, there is about as much proof and evidence (or more) of the lowered clock frequency as there is for "8 rops", as I don't think that ever came directly from Sony either. Nor did they list fill-rate.
- And they never technically said how many vertex shaders there were either as far as I have ever heard. There's very little "official" evidence of their number, outside of deconstructing their other math figures such as dot products.
- Numbers based on "being related to a G70" (NV47 technically) are still speculation without proof (considering GTX, GT, etc. versions have different functional unit counts, but are still considered 7800s), and not official word from Sony themselves. But this article would be pretty vague going by that criteria.
Swapnil 404 05:14, 11 August 2007 (UTC)
Two points in which I will bother with.
1. RSX does not, in any way, support DirectX 10. Come on, people, that is well understood by anyone with even a slight interest in the subject matter. Using Cell could "in theory" allow for some of the gpu functions covered by the DX10 API, and perhaps even some that are not (educated guess), but RSX is not geared for such things on its own. RSX has never, nor will it ever, be claimed to be DX10 compliant. Not by Sony, Nvidia, or anyone else.
"Perhaps", if people insist on adding an api spec at all, it could be considered DX9+, in the same way NV2A was DX8.1. Considering there are almost always features of gpus that are not exposed by DirectX, but would have no reason to be overlooked in a closed box such as the PS3. I can dig up some of the extension lists if anyone wants them. I believe them to be publicly available.
2. It is not 136 shader operations "per second". It is meant as "per clock cycle". Think about that for a second, and you will understand that it is a typo, and that 136 "per second" would make zero sense. (And it is even correctly cited as such, farther down, in reference to G70)
Those are the only two points I have ever bothered to edit, and they continue to be reverted back to the previous specification.
I personally will not bother with anything else I could not give a credible link for. Leaked developer documents don't count, nor do comments made by developers in forums, nor does pure speculation. Such as the clock frequency change to 500 mhz, which I believe has been well known for a while now, but since I have not bothered to research it to give an official credible source, I have never edited it. That has been brought up here by others, not me.
I can point out, that (while it is interesting as a thought experiment), the polygon and vertex section, (best case, worst case, etc) is entirely ad-lib, and doesn't reflect an actual game of any sort, nor would RSX be capable of those figures, even in benchmark tests, for a laundry list of reasons. But I won't personally bother with those either.
And 1.8 teraflops is a pumped-up, misleading marketing figure, probably not worth mentioning at all without proper context. It leads people to compare it to the legit floating point ratings commonly given for other gpus, which typically relate to programmable flops. It just misleads people, like the few on this talk page, into thinking that the RSX is magically "more powerful than even an 8800". But I will not bother with such things. That will be for others to decide, and debate.
Keep in mind, this is not meant to sound rude, or condescending in any way. It's just what I notice is wrong with this article. And those two points are incorrect, and should be changed. Swapnil 404 14:28, 9 August 2007 (UTC)
About the 500 MHz downgrade
I pulled this up from Answers.com: http://www.watch.impress.co.jp/game/docs/20060925/3d_tgs.htm. The article about TGS was posted in September 2006, and I believe Game Watch is a reputable source (correct me if I'm wrong). Plus it would kind of make sense, since Sony has been cutting down on the original PS3 hardware spec in order to increase yield percentage and lower costs. -- XenoL-Type 20:33 27 August, 2007 (UTC)
How odd: that page says "core clock 550MHz" in the text, then "core clock 500MHz" in the 7800GTX/RSX comparison table. Seeing as the page includes two contradictory statements, I would say that it cannot be used as a reference for the core clock speed. Does that make sense? Oh, there's the text about the downgrade, but it looks as if Sony/NVidia didn't officially release this information, and it was leaked. I haven't looked all the way through Wikipedia's rules, but I suspect that if something's said to have been leaked, then it can't be trusted, and therefore can't be used as a reference. Someone should go badger Sony/NVidia to stick up a page on their site! Cam.turn (talk) 05:51, 17 November 2007 (UTC)
- Leaked? What in the world makes you think that info is a leak? It was from the press release at the Tokyo Game Show in September 2006. Sony and Nvidia made this statement themselves. Aleksael (talk) 19:02, 13 December 2007 (UTC)
- Technically speaking, the majority of this article is speculation at best.
Very little of it can be verified by anything Sony or Nvidia released. Some can be assumed based on the specifications of G70, etc., but things like the polygon figures are guesswork (and wrong, afaik; no offense to whomever wrote them). And the dot product and shader-ops-per-second numbers include figures from both RSX and Cell added together. This is a page about just the RSX, not the PS3 as a whole. Swapnil 404 16:16, 3 December 2007 (UTC)
6800GS Anybody?
Since when have RSX's pipelines been halved for better yield? Apparently 89.232.22.90 decided to do a bit of semi-believable vandalism. I am removing the "12 pixel and 4 vertex removed for yield" parenthetical. Feel free to re-add the MIMD array descriptions if they are correct; I wasn't sure if they were, plus they weren't originally there anyway. Best regards! VastFluorescence (talk) 03:10, 26 January 2008 (UTC)
It wasn't vandalism. For starters, he corrected the clock figure, which should only be 500 MHz. Lots of specs about the RSX were downgraded after the earlier, higher announced specs were made. Please look at the link to the Tokyo Game Show article at Game Watch; it's been all over this discussion page, you can't miss it. Translate it at Google or AltaVista and you will easily see lots of figures that are lower than what the article is being continually upgraded to. I'm pretty much done with fighting. The Sony fanboys have their say here and they will edit any factual information that comes in on this page back to old and outdated specs. Aleksael (talk) 16:03, 13 February 2008 (UTC)
As far as I can tell, the Game Watch article (reference 2 on the hardware page) states that the shader clock is 550 MHz with 24 pipelines. I do realize the rest of the chip could be clocked at 500 MHz, and that's even if the slides are correct, which it sounds like is being disputed. I wasn't trying to agitate anyone by editing the page, and I do apologize if my corrections were wrong. I am not particularly knowledgeable on this subject, I admit, as I am more one for PC graphics. It seemed "24 pixel pipelines (12 disabled for yield)" was vandalism too, and once again I apologize for agitating what must be a most frustrating battle with fanboys. VastFluorescence (talk) 01:58, 16 February 2008 (UTC)
- No, the Game Watch article states this: "the specifications of RSX which is released in 2005, the point which was lowered. Core clock 550MHz, video memory 700MHz it is RSX of PS3 which is published at the time 2005, but with E3 2006 that completely changing, it became "secret", with the final sale model core clock 500MHz, was lowered to video memory 650MHz it has been transmitted." In other words (and in better English): the original specs published in 2005 were 550MHz; the finalized specs in September 2006 were 500MHz. It is because of this that this source is repeatedly deleted here, and it doesn't help that this is the only source. But c'mon, it was at the TGS. It doesn't get any more official than that. You are correct about the pipelines; the chart shows those are still 24, but the bit depth is halved (128-bit instead of 256) along with other downgrades that nobody seems to want to acknowledge here. Aleksael (talk) 22:59, 19 February 2008 (UTC)
- Technically, those are the same specs you've always seen, outside of the lowered clock frequencies. "Bit depth" here just refers to the bus connection to the GDDR3, which affects bandwidth to video RAM. RSX also has its FlexIO bus to Cell's memory controller, which gives it access to a percentage of XDR bandwidth, etc. ROPs were reduced to 8 because, the majority of the time, the frame buffer resides in GDDR3, and with only 20+ GB/s of bandwidth to GDDR they couldn't feed 16 ROPs for much anyway. That also affects fill rate, etc.
- But those are already accounted for in the article. There are no downgrades in that Game Watch article that aren't already listed, aside from clock frequency (and "perhaps" a redundant vertex shader, but there's no mention of that even in the Watch Impress article).
- Anecdotal evidence of clock changes:
- http://forums.ps2dev.org/viewtopic.php?p=62016&sid=8eafc7661d09f6e278ecf2f37966f939
- If you look at the system profile, from the code dump in the second "code" post, you'll see "500 mhz core clock" and "650 mhz ram" in it.
- Not nearly enough evidence on its own, but it does back up what was already said by Watch Impress, a respected website that does interviews with developers, attends trade shows, etc.
- Anyway, the wiki article still lists "100 billion shader ops" and 51 dot products, etc.. which have Cell figures added in, despite this being an RSX article, not PS3 as a whole.
- And the polygon figure is still false, for a list of reasons. RSX couldn't do half what they have listed for polygons. And it's not really "based" on a G71. Sure, G71 is on the same manufacturing process and, technically, is just a reworked, die-shrunk G70, but it's likely that G71 is based on the work done for RSX rather than the other way around. That's why Sony only ever says NV47 in their slides. Swapnil 404 (talk) 04:29, 23 February 2008 (UTC)
Or, we could consider the flops ratings from this slide, and consider the flops listed in this wiki article (and elsewhere), which is "27 flops per shader pipe". That would leave room for only 12 pixel shader "pipes" in that flop profile, coupled with 6 vertex shaders (a vertex shader is 10 flops).
(27 x 12) + (6 x 10) = 324 + 60 = 384 flops per clock. Then, if we consider additional texture look-up logic, perhaps for FlexI/O, we might assume something like a 7600. But I think there is enough evidence to still assume a cut-back 7800/7900.
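The flop tally above can be checked with a quick sketch (illustrative only: the per-pipe figures come from the article, while the 12/6 pipe counts are the hypothetical cut-down configuration being speculated about, not confirmed specs):

```python
# Per-pipe, per-clock FLOP figures as listed in the article.
PIXEL_FLOPS_PER_PIPE = 27
VERTEX_FLOPS_PER_PIPE = 10

def flops_per_clock(pixel_pipes, vertex_pipes):
    """Total FLOPs per clock for a given pipe configuration."""
    return (pixel_pipes * PIXEL_FLOPS_PER_PIPE
            + vertex_pipes * VERTEX_FLOPS_PER_PIPE)

print(flops_per_clock(12, 6))   # 384, the figure quoted above
print(flops_per_clock(24, 8))   # 728, a full 24/8 configuration
```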
Btw, clock frequency tells you absolutely nothing about the specifications of the chip. They can clock it anywhere they think they can keep it cool. Just because it's 500 MHz doesn't make it a 6800GS. It would be stupid to assume that "because it's 500, and a 6800 is more around 500, it must be a 6800GS". Clock frequency isn't part of the specifications profile (especially on a chip on a smaller manufacturing process); a 90nm 6800GS could easily be clocked faster than a 110nm one. Swapnil 404 (talk) 16:04, 1 March 2008 (UTC)
The whole 6800GS thing was a spin-off of the vandalism. 89.232.22.90 had reduced RSX's specs to resemble a 6800GS, a card I am familiar with and own. Upon further reading I did see the GDDR3 bus of RSX is 128 bits, which I suppose resembles a 7600GT. A funny aside is that my 6800GS is on a 256-bit bus. All in all, news and specs of the RSX are jumbled and confusing, and maybe even partly lost in translation, who knows. My understanding of the FlexIO is that RSX was reworked with compatible logic for it. I do not know if it can actually share main memory bandwidth, so once again I'll state I have no further intention of editing this article. Best Regards. VastFluorescence (talk) 21:15, 1 March 2008 (UTC)
- From what I've heard, the RSX has functionality similar to Nvidia's turbocache, and has extended cache sizes and registry tweaks specifically for it. And that the cpu's memory controller can take requests from the gpu, in the same way Xbox and Xbox 360's gpu memory controller, caters to the gpu as well as the cpu, all in and out of gddr3. Supposedly, there's a difference in the complexity of something like a pixel shader, when using xdr over gddr3, etc.. Swapnil 404 (talk) 08:12, 28 March 2008 (UTC)
5 ALU ops per cycle?
I would like to point out that the description says the pixel pipelines can perform 5 ALU ops per cycle, but when it states what operations they can do, it says 2 vector4 OR 2 scalar (dual/co-issue) and a fog ALU. So should we leave it or change it? It makes more sense to say two ALU ops per pipe, because a pixel shader op is a vector4 op (R,G,B,A). —Preceding unsigned comment added by 203.12.52.61 (talk) 04:00, 24 March 2008 (UTC)
- Well, it can do vector3+scalar on both ALUs, with an additional fp16 normalize op. You typically see pixel shader ops as vector3, and doing an additional scalar with that would be considered 2 operations, since it's not part of the same vector, afaik. Swapnil 404 (talk) 07:54, 27 March 2008 (UTC)
I see, though I thought it could do vector4 ops to get the same amount of data as a vector3+scalar, and that's considered 1 op? —Preceding unsigned comment added by 203.12.52.10 (talk) 09:54, 27 March 2008 (UTC)
- Well, same amount of data, as in the same number of flops, perhaps. But an operation is a collection of flops; they don't have to be a specific number of them. One ALU doing vector2+vector2 is 2 operations also, or 2 scalar, etc. Likewise Xenos's "vector4+scalar" is two ops. But if a vector2 op came up, the ALU can only process that one vector2 with a scalar that cycle. Swapnil 404 (talk) 09:43, 28 March 2008 (UTC)
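A trivial way to state the distinction being drawn here (illustrative only; `ops_per_issue` is a made-up helper for counting, not real hardware terminology):

```python
# An "operation" is one issued vector or scalar slot, regardless of
# its width. So vector2+vector2 and vector4+scalar both count as 2 ops,
# even though they involve different numbers of FLOPs.
def ops_per_issue(slots):
    """slots: list of vector widths co-issued on one ALU in one cycle."""
    return len(slots)

print(ops_per_issue([2, 2]))  # vector2 + vector2 -> 2 ops
print(ops_per_issue([4, 1]))  # vector4 + scalar  -> 2 ops
```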
Should we also make mention that it can only do a maximum of 16 flops per cycle (MADD ALUs), or should we just write 8 (not considering MADD)? —Preceding unsigned comment added by Gears, Gears, Gears (talk • contribs) 01:29, 15 June 2008 (UTC)
- One of the things they changed between the 6800 and 7800 was upgrading the second ALU to MADD from simply ADD. So just saying 8 would ignore that fact. However, I doubt it'd really fit in the article at all, also considering it isn't "verified" by Sony in a spec sheet. Swapnil 404 (talk) 11:00, 20 October 2008 (UTC)
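The 8-versus-16 question above comes down to whether a MADD (multiply-add) is counted as 2 FLOPs per component. A sketch of the arithmetic (assuming two vector4 ALUs per pixel pipe, as discussed in this thread):

```python
ALUS = 2        # two shader ALUs per pixel pipe
COMPONENTS = 4  # each ALU works on a vector4

# Counting a MADD as multiply + add (2 FLOPs per component) gives 16;
# counting each issued op as a single FLOP per component gives 8.
flops_madd = ALUS * COMPONENTS * 2
flops_single = ALUS * COMPONENTS * 1

print(flops_madd, flops_single)  # 16 8
```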
DirectX 10 support & Sourcing
Removed the line on DirectX 10 support. There is no reference for this and it is a complete fabrication. In fact, most of this article is unreferenced or comes from one source: Sony's September 2006 press release. That was nearly 2 years ago! Come on! I've also added the Primary Sources tag, since this article relies on information provided by Sony, which is not sufficient. Kristmace (talk) 09:04, 15 July 2008 (UTC)
- I know there isn't a lot of hard info on the RSX. But if you will bother to notice, the majority of the specs in the article were pumped up from an AnandTech article on a Sony presentation in the summer of 2005, and these inflated figures were then inserted into the press release from TGS September 2006. No offence to Swapnil and others for their good information in this thread, but every time I come looking at this page, everything is inflated. You are witnessing a console war, and the kids are using Wikipedia as the battleground. Aleksael (talk) 19:11, 25 August 2008 (UTC)
I agree that the clock rate is likely 500 MHz (and RAM 650 MHz), as that's also what the system profile lists, as folks fiddling with it have found. And the "dot products" figure is based on a slide implying "Cell+RSX". The floating point numbers are pumped-up marketing figures, based on all potential flops on the GPU, whereas most other GPUs simply list programmable shader performance. However, I think folks are assuming that everything else is simply derived from "2x 6800" and that the TGS article contradicted it. It's really not. It's derived from basic math, and reference to 7800s. (Whether that's a valid technique is up to whomever cares.)
For example: the "1.1 billion vertices per second" figure wasn't derived from 6800s in SLI.
2x 6800 SLI = 1.2 billion vertices per second. Why? Because 6800s had 6 vertex shaders. The simplest vertex transform takes a single VS 4 clock cycles to complete, so each vertex shader delivers 1/4 the clock rate. 6800s were clocked at 400 MHz, so each vertex shader provides a theoretical maximum of 100 million vertex transforms per second. 100 million x 6 = 600 million. So, two 6800s give 1.2 billion.
RSX supposedly has 8 vertex shaders (like a 7800/7900). So, if each VS is the typical 1/4 the clock rate, at 550 MHz each can transform 137.5 million per second. If there are 8, it's 1.1 billion.
That's what they use on every GPU that lists polygon transform rate (excluding Xenos). It's simple divide-and-multiply, not real-world testing, and it ignores a whole list of other factors. (That's why I pointed out that the "worst case"/"best case" polygon figures that used to be in this article, based on shared vertices and whatnot, were pretty pointless; the GPU couldn't actually set up what they had as "worst case".)
Anyway, for fill rate: RSX has 8 "ROPs", so its fill rate is 8 x 550 MHz, hence 4.4 billion. Nvidia's can do 2x if there's only a z-value, so it's 8.8 billion z-only. There are 24 texturing units (found on a Sony developer presentation slide), so it's 13.2 billion bilinear-filtered texture samples per second, etc. The bus to main RAM is 128-bit with a 1.4 GHz effective bus rate (it's DDR) to 700 MHz GDDR3 chips. So, 128 bits x 1.4 GHz / 8 = 22.4 gigabytes per second of bandwidth, etc.
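The divide-and-multiply described above, sketched out for anyone who wants to check the figures (a sketch only; the inputs are the announced pre-downgrade specs, which this thread disputes, and it's theoretical peak math, not real-world throughput):

```python
# Announced (pre-downgrade) inputs: 550 MHz core, 8 vertex shaders,
# 8 ROPs, 700 MHz GDDR3 on a 128-bit bus.
CLOCK_HZ = 550e6
VERTEX_SHADERS = 8
CYCLES_PER_TRANSFORM = 4   # simplest transform ties up one VS for 4 clocks
ROPS = 8
RAM_CLOCK_HZ = 700e6       # DDR, so 1.4 GHz effective
BUS_BITS = 128

vertices_per_sec = VERTEX_SHADERS * CLOCK_HZ / CYCLES_PER_TRANSFORM
fill_rate = ROPS * CLOCK_HZ                        # color+Z pixels/s
bandwidth_bytes = BUS_BITS / 8 * RAM_CLOCK_HZ * 2  # DDR: 2 transfers/clock

print(vertices_per_sec / 1e9)   # 1.1  (billion vertices/s)
print(fill_rate / 1e9)          # 4.4  (gigapixels/s)
print(bandwidth_bytes / 1e9)    # 22.4 (GB/s)
```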
Personally, I didn't write any of the article, and don't really edit it, outside of removing the DX10 stuff and altering the fill rate back to 4.4 from whomever keeps putting it at 6.8. But the reductions listed in the TGS article you linked are already accounted for, outside of the clock rate itself (which, of course, would affect the other figures as well). Having half the bus width and half the ROPs of a 7800 is in there.
Overall, these types of articles are gonna be worthless, if we rely on Sony, Nintendo, or Microsoft themselves. They don't even have to provide anything on specs, so PS4 will simply be: "Awesome" (reference Sony.com), "Truly next gen" (reference Square-Enix), "it makes the PS3 look like a last-gen console" (source EA), etc.. Swapnil 404 (talk) 03:42, 21 October 2008 (UTC)
AA
Someone said "please provide a reliable source for edit". But the "17.6 gigasamples" for AA has no source either, and doesn't make much sense at all really.
That figure assumes the GPU magically boosts its fill rate 4x to do AA. It doesn't. It has a maximum fill rate: 8 ROPs x "550" MHz = "4.4" gigapixels with a color value and a z-value. If only a z-value, it can do "8.8" z-only pixels.
MSAA pixels have a unique z value and duplicate the color value; that's why "2x is technically free" on 7xxx, thanks to the z-only rate, but 4x takes 2 passes.
ROPs have to process AA pixels. That's one of the main reasons you can't just tick 4x MSAA on everything and have it run the same. Swapnil 404 (talk) 09:10, 11 September 2009 (UTC)
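A hedged sketch of the sample-rate argument above (assuming the announced 550 MHz clock and 8 ROPs, both of which are disputed elsewhere on this page): the ROPs impose a hard ceiling, so AA samples cannot exceed the Z-only rate.

```python
ROPS = 8
CLOCK_HZ = 550e6

# Each ROP can emit 2 Z values per clock in Z-only mode.
z_only_samples = 2 * ROPS * CLOCK_HZ

# 2x MSAA rides the Z-only path ("technically free"); 4x needs two
# passes through each ROP, halving throughput rather than quadrupling
# the sample rate. So the ceiling is the Z-only rate itself.
max_aa_samples = z_only_samples

print(max_aa_samples / 1e9)  # 8.8 gigasamples/s, not 17.6
```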
Edit: Ok, some reference:
From: http://ixbtlabs.com/articles2/video/spravka-g7x.html
"The array of six quad processors is followed by the dispatch unit, which redistributes calculated quads among 16 Z, AA, and blending units (to be more exact, among 4 clusters of 4 units, processing an entire quad - geometric consistency must not be lost, as it's necessary to write and compress color and Z buffer.) Each unit can generate, check, and write two Z values or one Z value and one color value per clock."
"Besides, one such unit executes 2x multisampling "free-of-charge", 4x mode requires two passes through this unit, that is two clocks. Let's sum up features of such units:
* Writing colors — FP32[4], FP16[4], INT8[4] per clock, including MRT.
* Comparing and blending colors — FP16[4], INT8[4], FP32 is not supported as a component format
* Comparing, generating, and writing the depth value (Z) — all modes; two values per clock in Z-only mode. In MSAA mode — two values per clock as well.
* MSAA — INT8[4], not supported for floating point formats."
If that isn't official enough, it is common to how MSAA is implemented. You don't see 4x MSAA everywhere for a reason.
It's 8.8 maximum, but I'll leave it as is, so someone else can mess with it. Swapnil 404 (talk) 07:37, 11 September 2009 (UTC)
- Interesting. But still, what you have written here is pure original research, and the "reference" you provided is a) not a reliable source, and b) isn't even about the RSX.
- But, like you said, the "17.6" wasn't cited either, so I am removing it altogether until a reliable source can be found. 124.186.246.195 (talk) 07:37, 11 September 2009 (UTC)
Then we can pitch most of this article, then. The link is just a reference for how ROPs are set up and how they process AA pixels. It's the same chip family (NV47): the 7300, 7600, 7800, 7900, etc. all use the same scheme for processing AA. The NV47 ROP configuration is likely in an official, open-to-the-public document. But no prob if you remove it.
Honestly though, has anyone provided you a source for how many vertex processors there are? Or pixel shader processors? All Sony said was "pixel and vertex". I can show you a Sony slide that says "24 2d texture fetches", but that doesn't specify pixel shaders; vertex shaders can fetch point-sampled textures as well, and they don't specify "filtered", afaik. If I went by that, I could assume it had 20 pixel shaders and 4 vertex shaders. Or 16 pixel, 8 vertex, etc. They never specify much at all in anything official. Most of these figures are derived from it being NV47-based (supposedly the 7800-7900 GPUs). The only source (probably not acceptable) regarding pixel and vertex shaders is that Watch Impress article, which also lists the "500mhz / 650mhz" changes mentioned. But yeah, better to delete it totally. I have no problem with that, honestly. Swapnil 404 (talk) 08:58, 11 September 2009 (UTC)
- Yeah I think it's a shame console makers don't release complete specifications of their GPUs. 124.186.246.195 (talk) 08:08, 11 September 2009 (UTC)
Dot Products
The figure listed seems to be from this slide: http://img296.imageshack.us/img296/6577/rsxbandwidth6ae.jpg, which combines numbers from both Cell and RSX. They also list 512 MB of graphics memory, 2 TFLOPS, etc. It's pretty obvious it's meant for both. Dot products would likely just be shader ALUs; they provide the vector and scalar op. I could list an estimate, or we could figure up what Cell would potentially be, since it's pretty open on how many SPEs, SIMD width, etc., and subtract from it... but there wouldn't likely be an acceptable link for that. Just like the vast majority of everything else.
Anyway, sure, you can use SPEs for graphics processing, as Cell is designed with that in mind, but this is an RSX article, not the PS3 as a whole.
Just sayin'. I won't likely edit it myself yet, though. The only thing someone could really do is delete the 51 totally (or amend it as total system performance (Cell + RSX), since that can be linked and verified). Swapnil 404 (talk) 16:54, 17 September 2009 (UTC)
This article...
Is absolute garbage. It's been confirmed by many sources that the final clock speed is 500/650 MHz. The fact that we base all of the specs on outdated and old sheets just goes to show, really. All of these specs are inaccurate or just completely wrong. You might as well just friggen put that the PS3 still has 2 HDMI ports. — Preceding unsigned comment added by MegaCadet (talk • contribs) 01:26, 1 September 2011 (UTC)
- Yep. It's pretty pointless now. And almost all the math is unsourced (and/or marketing figures). And listing something like "400.4 Gigaflops" for floating point operations isn't informative at all, when we consider the fact that these days most GPUs list flops associated with raw shaders, without sifting the GPU for every last number to count (as is the case in this article).
- Like here for example: http://en.wikipedia.org/wiki/GeForce_400_Series#GeForce_GT_440
- RSX would "seem" to hold up pretty well in Gflops compared to newer more advanced gpus with more than 3 times the transistors, clocked much faster, etc.. But we know better than that.
- Not sure about listing the clock changes without a source, but a lot of it isn't informative, and is just misleading. Swapnil 404 (talk) 04:05, 27 September 2011 (UTC)
Versions
Wouldn't it be a good idea to add a list or at least some information about the different iterations of RSX (90nm, 65nm, 40nm) and their associated die sizes, power dissipation, etc.? The Seventh Taylor (talk) 03:00, 12 March 2012 (UTC)
Why is effective memory 1.4 GHz
If the GDDR3 clocks at 650 MHz, wouldn't it be 1.3 GHz? — Preceding unsigned comment added by 92.216.171.153 (talk) 01:52, 12 January 2018 (UTC)
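For what it's worth, the DDR arithmetic behind both numbers can be sketched as follows (a sketch only: 700 MHz is the announced spec that gives the article's 1.4 GHz figure, while 650 MHz is the reported final clock discussed above, which would indeed give 1.3 GHz):

```python
# GDDR3 is double data rate: two transfers per memory clock, so the
# effective transfer rate is twice the clock.
def effective_rate_ghz(mem_clock_mhz):
    return mem_clock_mhz * 2 / 1000

print(effective_rate_ghz(700))  # 1.4 -> the announced spec
print(effective_rate_ghz(650))  # 1.3 -> the reported final clock
```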