User talk:MaxDZ8
Reply on User comment in Talk:Shader
Thank you too for always maintaining the page. I am also not sure what is best, but we are humans and we keep trying to do our best (though it will never happen). Cheers....^^. Wish you good days too....Draconins 14:34, 3 August 2006 (UTC)
Re: "Interesting Stream GPU" History Removal and rewrite.
I removed the history because some of it is incorrect. To note:
1. NV2x had programmable control for both vertex and fragment operations, not just vertex operations as the history states.
2. There is no such thing as "RD3xx" – it's the R3xx series (Radeon 9700). Branching support was static only in the fragment pipeline.
3. While NV4x had conditional branching in the fragment pipeline, its granularity was very large (4K pixels for NV4x, 1K for G7x), limiting its usefulness.
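The granularity point in (3) can be illustrated with a toy model (a sketch, not vendor code; all batch sizes and costs below are illustrative assumptions): on a SIMD machine, a whole batch of pixels pays for both sides of a branch unless every pixel in the batch takes the same side, so coarser batches waste more work on incoherent branches.

```python
def shading_cost(pixels, batch_size, cost_per_side=1):
    """Total cost when pixels run in SIMD batches of batch_size.

    A batch whose pixels all take the same branch side pays one side's
    cost per pixel; a mixed batch pays both sides for every pixel.
    """
    total = 0
    for i in range(0, len(pixels), batch_size):
        batch = pixels[i:i + batch_size]
        mixed = len(set(batch)) > 1
        total += len(batch) * cost_per_side * (2 if mixed else 1)
    return total

# A screen where the branch side flips every 2000 pixels (coherent regions).
pixels = ['A' if (i // 2000) % 2 == 0 else 'B' for i in range(1 << 20)]

for batch in (4096, 1024, 16):  # NV4x-like, G7x-like, fine-grained
    print(batch, shading_cost(pixels, batch))
```

With 4096-pixel batches nearly every batch straddles a branch boundary and pays double; with fine-grained batches almost none do.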
The point of the update was to provide a little more structure around the history of graphics processors by linking them to their DirectX functionality (Wikipedia has plenty of entries on DirectX, so these can be referenced), and to point out that it's really only Shader Model 3.0 parts that provide much usefulness for stream processing outside of standard graphics processing.
Thank you for discussing the issue there.
My proposed version
- GPUs are recognized as widespread, consumer-grade stream processors. Although they are usually limited in hardware functionality, a great deal of effort is spent on optimizing algorithms for this family of processors, which usually have very high horsepower. Various generations are worth noting from a stream processing point of view.
- Pre-NV2x: no explicit support for stream processing. Kernel operations are usually hidden in the API and provide too little flexibility for general use.
- NV2x: kernel stream operations are now explicitly under the programmer's control, but only for vertex processing (fragments still use the old paradigms). The lack of branching support severely hampers flexibility, but some algorithms can be run (notably, low-precision fluid simulation).
- RD3xx: increased performance and precision, with limited support for branching/looping in both vertex and fragment processing. The model is now flexible enough to cover many purposes, especially at the vertex processing level, which supports dynamic branching.
- NV4x: the current (September 25, 2005) state of the art. Very flexible branching support, although some limitations still exist on the number of operations to be executed and on recursion depth. Performance is estimated at 20 to 44 GFLOPs.
Your proposal, being discussed
- GPUs are becoming more recognized as widespread, consumer-grade stream processors. GPUs based around Microsoft's DirectX 8 API, such as ATI's R200 or NVIDIA's NV20, introduced some programmability into the fragment pipeline; however, it was so limited that the only use was specifically for 3D graphics applications. Later, with graphics processors conforming to the DirectX 9 specification, increased programmability was introduced into the pipeline, and the Shader Model 3.0 specification demanded FP32 precision throughout and fully conditional branching in both the vertex and fragment pipelines, making them more attractive for potential non-graphics uses.
Although several generations of parts conforming to the Shader Model 3.0 specification came from ATI (R520, R580) and NVIDIA (NV40, G70, G71), only ATI's R580 graphics processor has thus far gained much traction in applications outside of standard 3D graphics, with commercial applications such as those provided by PeakStream, and research with the distributed computing application Folding@home. This can be attributed not just to its programmable performance, rated at 374 GFLOPs at 650 MHz, but also to its finer thread sizing, benefiting dynamic branching performance, and its handling of available register space in comparison to other Shader Model 3.0 compliant architectures. [1]
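For readers wondering where a headline figure like 374 GFLOPs comes from, a back-of-envelope decomposition is peak rate = clock × units × ops per clock. The 48-unit and 12-ops-per-clock numbers below are my assumptions for illustration, not sourced from this discussion:

```python
# Hypothetical decomposition of a peak-GFLOPs figure: clock * units * ops/clock.
clock_hz = 650e6               # quoted 650 MHz engine clock
pixel_shader_units = 48        # assumption: R580-class part with 48 fragment units
flops_per_unit_per_clock = 12  # assumption: MADD-heavy issue rate per unit per clock

gflops = clock_hz * pixel_shader_units * flops_per_unit_per_clock / 1e9
print(f"{gflops:.1f} GFLOPs")  # → 374.4 GFLOPs
```

Such peak numbers assume every ALU issues every cycle, which real shaders rarely achieve.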
A first difference is in the structure, as you say. While my version reinforces structure within the page, yours reinforces structure between pages. I agree shader models should be mentioned, but I still think the previous structure is better. It gives the best of both worlds.
I don't think there's a strict need for SM3.0; it makes things much easier, but saying it's a must-have is something I feel is definitely excessive. On the NV1x generation, a few people managed to run even full radiosity processes. I definitely disagree on SM3.0 as a need.
On your comment
[edit]“ | NV2x had had programmable control for both Vertex and Fragment operations, not just vertex operations as the history states. | ” |
Register combiners were definitely programmable and very powerful when coupled with the texture_shader extension, but as you know, that's a long way from NV_vertex_program (vertex programmability). Register combiners do fall in the pre-NV2x class (it's really more a fixed-function pipe, and in fact NV20's functionality was just an overhauled NV_register_combiners from NV1x – so if you're right, this means NV1x was also programmable). I think the previous version is right here.
“There is no such thing as "RD3xx" – it's the R3xx series (Radeon 9700). Branching support was static only in the fragment pipeline.”
Minor naming issue. I agree to change it, but I'll still personally go for RD3xx (high-end chips) and RV3xx (value-driven, like the 9600). True for dynamic branching in vertex processing vs. pixel processing. Considering however that most interesting decisions happen in the FS, I believe this is a minor issue... I agree this needs to be clarified.
“While NV4x had conditional branching in the fragment pipeline, its granularity was very large (4K pixels for NV4x, 1K for G7x), limiting its usefulness.”
Negligible performance issue. To the programmer interested in this kind of thing, the really important point is that it works. Please note the history is actually feature-driven rather than performance-driven. IMHO, even with limited performance the feature is indeed useful and would likely turn out to be faster than CPU processing anyway. R5xx chips are just catching up with NV4x-G70 (I don't really care for alpha-to-coverage AA) with improved speed. Not a bad thing, but not an important improvement either. I believe this should be kept.
On the edit itself
“R200 or NVIDIA's NV20, introduced some programmability into the fragment pipeline; however, it was so limited the only use was specifically for 3D graphics related applications...”
Just not true, but likely to be unprovable now in either direction. As said above, the fragment processing was really an improved fixed pipe rather than a programmable one. I also disagree that researchers began to be interested in GPGPU only when SM3.0 was released.
“only ATI's R580 graphics processor has thus far gained much traction in applications outside of standard 3D graphics, with commercial applications such as those provided by PeakStream, and research with the distributed computing application Folding@home”
Just plain wrong. See the NVIDIA developer pages as well as GPU Gems 1 and 2, and various NVSDK examples. ATI's R580 is undoubtedly the first card to use pro apps to provide a wow factor, a questionable marketing strategy.
“This can be attributed not just to its programmable performance, rated at 374 GFLOPs at 650 MHz, but also its finer thread sizing, benefiting dynamic branching performance, and handling of available register space in comparison to other Shader Model 3.0 compliant architectures.”
http://www.techreport.com/etc/2006q4/stream-computing/index.x?pg=3: plain and simple, the article is off topic. GPU Folding@home is still a beta. I would wait for more accurate estimates in a less controlled environment before this can be declared a winner.
Long story short
I realize your intentions are good and some proposals are definitely an improvement.
I still think the previous version is the better one, and I will revert your change 3 days from now.
Thank you again for discussing the issue there instead of messing up the talk page.
MaxDZ8 talk 06:48, 13 October 2006 (UTC)
RE (WipEout!)
“Register combiners were definitely programmable and very powerful when coupled with the texture_shader extension but, as you know, it's a long way from NV_vertex_program (vertex programmability). Register combiners do fall in the pre-NV2x class (it's really more a fixed-function pipe, and in fact NV20's functionality was just an overhauled NV_register_combiners from NV1x – so if you're right, this means NV1x was also programmable). I think the previous version is right here.”
Although NV1x and NV2x both feature some limited register combiner functionality, NV2x also extended the programming model slightly with the introduction of FX8 ALUs. NV2x conforms to the (albeit limited) PS1.1 programming model, whereas NV1x's register combiners didn't. NV1x was basically a Dot3 product part, while NV2x has PS1.1 capabilities, which go beyond Dot3.
R200 should also get a mention: as supporting PS1.4, it (relatively, in DX8 terms) significantly increased the flexibility and capabilities of the fragment pipeline, as well as introducing FX16 precision ALUs.
- Look, do whatever you want with this. I remember there were a few differences, but I have no time to check. I still believe this should be in GPGPU, however.
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
“Minor naming issue. I agree to change it, but I'll still personally go for RD3xx (high-end chips) and RV3xx (value-driven, like the 9600).”
The point I'm making here is that there is no such part as "RD3xx". It's R300 or R3xx – R300 is the Radeon 9700.
- I understood this perfectly; in fact, I integrated it.
- MaxDZ8 talk
“True for dynamic branching in vertex processing vs. pixel processing. Considering however that most interesting decisions happen in the FS, I believe this is a minor issue... I agree this needs to be clarified.”
You are correct that the fragment shaders are generally more important for GPGPU purposes, which makes it all the more important in the context of the article to be correct and state that dynamic branching and looping are not available in the fragment shaders on these parts. (Although linking to DirectX and Shader Model 2.0 will point out the true capabilities.)
- Which happens to be a mess... this is really meant to be a quick overview fitting in a few lines. I believe those details would be better placed in GPGPU. Let's try to keep the two articles distinct.
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
“Negligible performance issue. To the programmer interested in this kind of thing, the really important point is that it works. Please note the history is actually feature-driven rather than performance-driven. IMHO, even with limited performance the feature is indeed useful and would likely turn out to be faster than CPU processing anyway. R5xx chips are just catching up with NV4x-G70 (I don't really care for alpha-to-coverage AA) with improved speed. Not a bad thing, but not an important improvement either. I believe this should be kept.”
Features and performance are always important. However, in applications such as these, how the features are achieved is of vital importance, and that is exactly why one architecture is getting more traction than another.
- See the overview issue. Performance is definitely less important than features in design terms, since it's usually evaluated after the design has been chosen. Do I need a citation here?
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
While R5xx chips came later than NV4x/G7x chips and brought, ostensibly, the same feature set, the architectural differences between them directly relate to their applicability in these types of applications. The problem with G7x is that it only handles a single, very large, pixel batch per quad in order to hide texture latency, meaning that with longer shaders using more register space the thread size has to decrease (when the register space fills), making it impossible to hide the texture latency. R5xx has a threading mechanism where it juggles up to 128 (fine-grained) threads per quad – when register space is used up there are simply fewer threads, but there are more likely to be other threads that can be scheduled for ALU work.
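The register-pressure argument above can be sketched numerically. All constants here are illustrative assumptions, not actual NV4x/R5xx parameters: latency hiding needs enough in-flight threads, and the thread count is capped by register file capacity.

```python
def schedulable_threads(register_file, regs_per_thread, max_threads=128):
    """Threads that fit when each holds regs_per_thread registers."""
    return min(max_threads, register_file // regs_per_thread)

def latency_hidden(threads, alu_cycles_per_thread, texture_latency):
    """True if the other threads supply enough ALU work to cover one
    thread's texture fetch latency."""
    return (threads - 1) * alu_cycles_per_thread >= texture_latency

REGISTER_FILE = 4096  # illustrative register file size per quad
for regs in (16, 64, 256):  # growing per-thread register footprint
    t = schedulable_threads(REGISTER_FILE, regs)
    print(regs, t, latency_hidden(t, alu_cycles_per_thread=8, texture_latency=200))
```

As the per-thread register footprint grows, the schedulable thread count shrinks until the fetch latency can no longer be covered.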
- See again the overview issue. Again, I believe this has better place in GPGPU and not there.
Mike Houston points out here how these architectural differences affect performance specifically for GPGPU applications:
“Mike Houston: All GPUs are SIMD, so branching has a performance consequence. We have carefully designed the code to have high branch coherence. The code heavily relies on a tremendous amount of looping in the shader. On ATI, the overhead of looping and branching can be covered with math, and we have lots of math. We run the fragment shaders pretty close to peak for the instruction sequence used, i.e. we can't fully use all the pre-adders on the ALUs. But, I wouldn't say branching is the enabler. I'd say the incredible memory system and threading design is what currently make the X1K often the best architecture for GPGPU. Those allow us to run the fragment engines at close to peak.
What ATI can do that NVIDIA can't that is currently important to the folding code being run is that we need to dynamically execute lots of instructions per fragment. On NVIDIA, the shader terminates after 64K instructions and exits with R0->3 in Color[0]->Color[3]. So, on NVIDIA, we have to multi-pass the shader, which crushes the cache coherence and increases our off-chip bandwidth requirements, which then exacerbates the below.
The other big thing for us is the way texture latency can be hidden on ATI hardware. With math, we can hide the cost of all texture fetches. We are heavily compute bound by a large margin, and we could actually drive many more ALUs with the same memory system. NVIDIA can't hide the texture latency as well, and perhaps more importantly, even issuing a float4 fetch (which we use almost exclusively to feed the 4-wide vector units) costs 4 cycles. So NVIDIA's cost=ALU+texture+branch, whereas ATI is MAX(ALU, texture, branch).”
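Houston's closing formula can be restated as a tiny cost model (the numbers are illustrative only, not measured shader costs):

```python
def serial_cost(alu, tex, branch):
    """Per-fragment cost when ALU, texture and branch work serialize."""
    return alu + tex + branch

def overlapped_cost(alu, tex, branch):
    """Per-fragment cost when the three units run concurrently."""
    return max(alu, tex, branch)

# A compute-bound shader: plenty of math to hide fetches and branches behind.
alu, tex, branch = 100, 40, 10
print(serial_cost(alu, tex, branch))      # → 150
print(overlapped_cost(alu, tex, branch))  # → 100
```

The more math-heavy the shader, the larger the gap between the two models.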
Although there will be applications where G7x will outperform a CPU, Mike shows the performance implications the G7x architecture has on GROMACS calculations on page 32 of this presentation - G70’s architecture in this application is providing less performance than a P4 3.0GHz while R520 is 2.5-3.5x faster.
- I want to state it clearly: I have no doubt there's a significant performance difference, but again, this is about the programming model and it's meant to be an introduction. It would be desirable to include all the detail we can, but unluckily human brains can handle 7±2 pieces of information on average, so we need to keep it simple.
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
“Just plain wrong. See the NVIDIA developer pages as well as GPU Gems 1 and 2, and various NVSDK examples. ATI's R580 is undoubtedly the first card to use pro apps to provide a wow factor, a questionable marketing strategy.”
I agree that there have been plenty of tests and applications that have experimented with GPUs for general-purpose applications; however, the point I'm trying to convey is that it's only now, with Shader Model 3.0 and the specifics of the R5xx architecture, that we are beginning to see some actual commercial and end-user applications that provide useful improvements over CPU processing. I don't agree with the notion of just dismissing this as marketing.
- Mass marketing didn't happen now because of PS3.0 but because PS3.0 is becoming a commodity. I believe the model "by itself" didn't attract more attention than previous models (normalizing for availability, of course). Also, this fits perfectly in the bigger picture.
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
“plain and simple, the article is off topic. GPU Folding@home is still a beta. I would wait for more accurate estimates in a less controlled environment before this can be declared a winner.”
I don't understand how the linked article is off topic? I thought it was wholly relevant to the topic. Although perhaps some of the material from Mike Houston may be more so?
- It is not off topic; it is that this article is more on stream processing than on GPGPU details. When I first wrote it, I considered not even mentioning GPUs... since I believe GPUs to be the best stream processors available, I took note of them, but please consider disclosing information in a following step for interested people. MaxDZ8 talk 07:57, 14 October 2006 (UTC)
I'd also not be so keen to dismiss Folding just because it's a beta. Yes, it's a beta, but this is now something that anyone can download and use to see real practical utilization of GPGPU/stream processing on their graphics cards. Stanford and the Folding project are currently getting useful data from the client as well – so far they are receiving 29 TFLOPs of active processing power, nearly a fifth of the processing power of all the active CPUs, but from merely 442 GPUs, each providing over 70x the processing power of an active CPU.
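Those figures can be sanity-checked with simple arithmetic, using only the numbers in the message above (the implied per-CPU rate is a derived value, not a quoted one):

```python
gpu_tflops = 29.0  # active GPU processing power quoted above
gpu_count = 442

per_gpu_gflops = gpu_tflops * 1000 / gpu_count
print(f"per GPU: {per_gpu_gflops:.1f} GFLOPs")

# If one GPU really delivers ~70x one CPU (the "70x" claim),
# the implied per-CPU sustained rate is:
per_cpu_gflops = per_gpu_gflops / 70
print(f"implied per CPU: {per_cpu_gflops:.2f} GFLOPs")
```

That works out to roughly 65 GFLOPs sustained per GPU versus under 1 GFLOP per CPU, which is at least internally consistent with the "70x" claim.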
- See again overview issue. By the way, who should see what?
- MaxDZ8 talk 07:57, 14 October 2006 (UTC)
Stream Processing
Hi, can you please apply the same rigor and clean up the Interesting Stream Processors section on the [[2]] page? I would love to copy the text on the GPU side over, but I think this deserves better treatment.
Thx!
Yes, it's something I've been considering for a while. This stuff on Imagine has definitely grown out of control.
For the time being, the first thing is the merge.
I must say I have been pretty busy in the last weeks so it'll take a bit of time.
MaxDZ8 talk 07:52, 24 October 2007 (UTC)
Tesla Roadster
Hi, I fixed one of your edits on the Tesla Roadster talk page that Ilokjju had vandalized. I want to give you a heads up on the page and article that I suspect a sock puppeteer (Curaralhos, Ilokjju, DrPersti, ElonMusky, Mu8sky, Rogerstone, Prof nomamescabron, Prof Bujju, and maybe Uramanbfas, Prof Schnitzer, and 216.180.72.14) is editing it. I also noticed one possible puppet, Maraimo, vandalized Folding@home. Kslays 21:08, 15 December 2006 (UTC)
(Note: I'm uncertain on where to put this, so I also posted on your talk page)
Thank you for your work on the Talk:Tesla Roadster page! I see this user is really doing something unwanted and hardly tolerable. Should we request a user block (for the little it can do)? I'm not really used to such things, and I'm not sure I understand what the above means. Is there something I can do?
MaxDZ8 talk 10:05, 17 December 2006 (UTC)
Thanks for the notice on my talk page, but we can continue this discussion here or on the Talk:Tesla Roadster page because I added them to my watchlist. I didn't know what to do either, so after poking around the Wikipedia policy websites I posted it on the noticeboard: http://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/Incidents#Possible_Sock_Puppet They blocked a few of them, but not all. I guess it's hard to tell if they are really all one person. I'm also a fan of the Roadster and don't think the "Criticism" and "Rebuttal" pages should be there at all, especially considering they were largely contributed by this author. Maybe we should be bold in editing and just remove them? -Kslays 15:56, 17 December 2006 (UTC)
Parallax Occlusion Mapping
I see you're writing something on Parallax Occlusion Mapping, something my wife came up with in 2003 or so. http://thinktank.eos4life.com/shaderdev.html If you need any info about it my e-mail is at|-|e|\|@e0s4life.c0/\/\, I can just pass it along to her. Eos4life 07:40, 24 December 2006 (UTC)
I've dropped you a mail. I know mail servers around here are a bit busy discarding legitimate mail, so if you don't get it, let me know.
I don't know how you deal with spam, but I feel better with your mail address obscured a bit. I hope this won't be a problem for you. Yes, it scares me quite a bit!
Note: |-| is 'h', |\| is 'n', zero was 'o', /\/\ was 'm'.
MaxDZ8 talk 12:40, 24 December 2006 (UTC)
Yeah, I got the mail, although it did come through detected as spam. You're right, and thank you for changing it. I was actually just about to do so myself. Eos4life 04:41, 25 December 2006 (UTC)
WP:ATT
Can you link to the removal of referenced material you talked about on the help desk? - Mgm|(talk) 12:00, 3 March 2007 (UTC)
Reference deleted: here. I have reverted this myself, but the problem seems to arise again and again, in slightly different flavours: people don't seem to care about references.
For convenience, here's the reply I just posted on the WP:HD.
The article I am speaking about is very technical (shader). The specific "removed reference page" was this (there's a quick overview of this edit run on talk:shader). The "false referencing" issue is visible in the latest version of shader as well – you can see the references are currently "unreferenced" by the text: in fact, they should be deleted. This opens the "reference checking" problem...
Being a term used for marketing, there's a lot of hype around this stuff, and I understand the "feel" of some users, but again, this is stretching it.
Something I forgot to put on the HD: the problem isn't the edit itself (which has been reverted) but the fact that similar problems arise at least once per month (in fact, the article is now a mess – compare it to the versions from a year ago!) MaxDZ8 talk 07:53, 5 March 2007 (UTC)
What do you have against video games?
I wonder why you have reverted adding both OpenGL and GPGPU to Wikipedia:WikiProject Video games? And yes, "GL is not just for games man!", but OpenGL is used in games now. It's not only used in games, and the {{cvgproj}} template doesn't say that. That template is on the talk page, not the article. It will only bring more attention to these articles, which is usually a good thing. Can you provide a good reason for keeping these articles out of the video games project? --Imroy 16:14, 21 March 2007 (UTC)
Because many people do have this kind of prejudice. I understand perfectly that it can also be used in video games, but I don't see any sense in adding something like that to an API. The same applies to GPGPU, which is paradigm-like. If this should be added, then C/C++, Java, .NET and Win32 should be too, and maybe also MS Windows, Linux and MacOS. It simply does not make any sense at all.
MaxDZ8 talk 05:14, 22 March 2007 (UTC)
Diesel cycle
Regarding the image at diesel cycle: the small v (specific volume) is necessary to normalize it. I'll look into a way of explaining it concisely; I don't fully understand it myself yet.
The curvature of the cycle is fine, as it is only a representation of the ideal cycle. A good image for comparison is [3]; even though the curves are a bit iffy, it is very informative. If you could put heat in, heat out, work in and work out on your image, like on the linked image, that would be great. Then I'll write a summary of the image and the processes represented in it. Thanks! Lkleinjans 12:27, 22 May 2007 (UTC)
Ok, I'll leave the curvature "ideal" to stress more the difference in theory and implementation.
Added, as in the picture provided, Win, Wout, Qin, Qout.
I'll add arrows in the next few days - looks like I'm unable to design one without getting it skewed.
For the time being, that's it.
As for the volume, it's actually 'v' (lowercase), but it doesn't look like it because of the lack of comparison letters. I have considered using a different font which looked more "italic", but they all end up looking more like 'u'.
Thank you for the tips, it's exactly what I was looking for.
MaxDZ8 talk 17:37, 22 May 2007 (UTC)
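Since the diagram shows the ideal (air-standard) cycle with Qin, Qout, Win and Wout labelled, the standard efficiency relation tying those quantities together may be worth keeping at hand. This is a sketch with illustrative ratios, not values taken from the image:

```python
def diesel_efficiency(r, cutoff, gamma=1.4):
    """Air-standard Diesel cycle thermal efficiency (1 - Qout/Qin).

    r      -- compression ratio v1/v2
    cutoff -- cutoff ratio v3/v2 (end of constant-pressure heat addition)
    gamma  -- heat capacity ratio (1.4 for air)
    """
    return 1 - (1 / r ** (gamma - 1)) * (cutoff ** gamma - 1) / (gamma * (cutoff - 1))

# Illustrative: r = 18, cutoff = 2 gives roughly 63% ideal efficiency.
print(f"{diesel_efficiency(18, 2):.3f}")
```

Note the cutoff-ratio factor is always greater than one, which is why the ideal Diesel cycle is less efficient than the ideal Otto cycle at the same compression ratio.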
Superb! I have added the image + explanation. When you add the arrows could you change v: volume to v: specific volume as this is more accurate. Thanks Lkleinjans 21:47, 22 May 2007 (UTC)
Ok, here we go! Maybe some subscripts should be bigger and the arrows smaller, but it seems rather good to me right now. Feel free to propose improvements! Since we're here, it's better to get it right once and for all.
MaxDZ8 talk 17:18, 23 May 2007 (UTC)
This version is as good as it gets! No further improvements needed. Thanks Lkleinjans 22:15, 25 May 2007 (UTC)
Shader (disambiguation)
A {{prod}} template has been added to the article Shader (disambiguation), suggesting that it be deleted according to the proposed deletion process. All contributions are appreciated, but this article may not satisfy Wikipedia's criteria for inclusion, and the deletion notice explains why (see also "What Wikipedia is not" and Wikipedia's deletion policy). You may contest the proposed deletion by removing the {{dated prod}}
notice, but please explain why you disagree with the proposed deletion in your edit summary or on its talk page. Also, please consider improving the article to address the issues raised. Even though removing the deletion notice will prevent deletion through the proposed deletion process, the article may still be deleted if it matches any of the speedy deletion criteria or it can be sent to Articles for Deletion, where it may be deleted if consensus to delete is reached. If you endorse deletion of the article, and you are the only person who has made substantial edits to the page, please tag it with {{db-author}}. JHunterJ 14:27, 8 September 2007 (UTC)
The article Shader (realtime, logical) has been proposed for deletion because of the following concern:
- abandoned technical essay which doesn't provide any useable content beyond that already offered by shader
While all constructive contributions to Wikipedia are appreciated, content or articles may be deleted for any of several reasons.
You may prevent the proposed deletion by removing the {{proposed deletion/dated}}
notice, but please explain why in your edit summary or on the article's talk page.
Please consider improving the article to address the issues raised. Removing {{proposed deletion/dated}}
will stop the proposed deletion process, but other deletion processes exist. In particular, the speedy deletion process can result in deletion without discussion, and articles for deletion allows discussion to reach consensus for deletion. Chris Cunningham (user:thumperward) (talk) 12:27, 28 April 2014 (UTC)
Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 13:06, 23 November 2015 (UTC)
Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 13:32, 23 November 2015 (UTC)