Talk:OpenCL
This is the talk page for discussing improvements to the OpenCL article. This is not a forum for general discussion of the article's subject.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to multiple WikiProjects.
Given that
Given that Snow Leopard was only officially announced today, and OpenCL as part of it, we are likely to have more detail on the API in the coming days. At this point in time this entry is only suitable as a stub. I have added a reference to an Apple press release, but it is very light on substance in this regard. --AJ Mas (talk) 02:55, 10 June 2008 (UTC)
- I ref'd some other stuff, but I'm not sure if it satisfies replacing {{unreferenced}} with {{stub}}. P.S. Gotta love how Apple PR is abusing "Open" and "standard" in the passive voice "has been proposed as an open standard" b.s. meaningless PR speak, but I don't know how to [roll eyes] when quoting a press release. -- Skierpage (talk) 05:59, 10 June 2008 (UTC)
- Thanks for fixing this page up! It is all valid information that's been collected; the Open-ness will be revealed or Apple will be mocked. Just "has been proposed as an open standard" doesn't cut it -- we need to find out to which standards body this language has been submitted for proposed adoption. --Ncr100 (talk) 06:36, 10 June 2008 (UTC)
There is a discussion thread on Reddit. It doesn't provide anything concrete, hence its inclusion on the discussion page rather than in the article. If the thread is anything to go by, it would seem that the details are still being obscured by an NDA --AJ Mas (talk) 02:05, 11 June 2008 (UTC)
Another reference: http://www.betanews.com/article/So_what_is_OpenCL_Apples_next_enhancement_to_Mac_OS_X_106/1213196124 --AJ Mas (talk) 18:20, 13 June 2008 (UTC)
- "As of Jan 2009, there are no closed source or open source implementations made." Does anyone have any links to sources for potential/tentative release dates for an SDK of any kind for OpenCL? searched around a lot and can't find anything —Preceding unsigned comment added by 84.9.160.58 (talk) 03:20, 22 March 2009 (UTC)
Category:Graphics Software?
Although OpenCL is designed for GPUs, categorizing it as "Graphics Software" seems misleading: its design purpose has been general-purpose computing on GPUs, not graphics-related computation (although that is certainly possible). -- 89.247.73.179 (talk) 11:53, 10 December 2008 (UTC)
Quality
The article needs to be rewritten to improve it, along the lines of the OpenGL and OpenAL articles. Example:
Developer(s)     | Apple Inc.
Stable release   | 1.0 / December 8, 2008
Operating system | Cross-platform
Type             | API
License          | Various
Website          | http://www.khronos.org/opencl/
201.36.251.183 (talk) 19:31, 4 January 2009 (UTC)
- once it's fully released it will get more attention ;) Markthemac (talk) 06:22, 20 April 2009 (UTC)
Open Standard
I recognize that Apple was the founder of OpenCL and they own trademarks and so forth, but since the Khronos Group has taken over, is it really correct to list Apple as the developer in the infobox? -- Henriok (talk) 14:47, 26 August 2009 (UTC)
Competitors
I think that OpenCL has one competitor, and that is DirectCompute, whereas the article lists CUDA as one.
OpenCL and DirectCompute are competitors because code written for either will work on any GPGPU-capable graphics card, while CUDA and Stream (by Nvidia and ATI respectively) are proprietary and will only work on that company's processors.
— Preceding unsigned comment added by 72.160.132.26 (talk) 03:32, 24 April 2010 (UTC)
broken link
The link to the general-purpose computing benchmark is broken. Is there an official benchmark that can be downloaded and run somewhere? —Preceding unsigned comment added by 128.214.3.55 (talk) 07:29, 27 August 2010 (UTC)
Uses?
Has OpenCL been used for anything? Is there software I can run on my GPU through OpenCL? —Darxus (talk) 07:13, 6 January 2011 (UTC)
- IIRC Apple added or will add optimizations based on LLVM and OpenCL to MacOS X software. I am totally unsure about this, though, and I'm too lazy to recheck the facts :) 1exec1 (talk) 00:28, 8 January 2011 (UTC)
- The people most interested in running OpenCL are researchers in computer science, mathematics and related subjects, as the processing power can be roughly 27 times that of a $1,000 i7 (a very rough estimate). Such power comes at the cost of long programming and debugging processes. The hardware heterogeneity among end users makes it less practical for mass-produced end-user applications. Besides, most end-user application developers are not "riding the wave's crest", so to speak; they have long-established production logistics, and it takes them more time to incorporate new technology than it does streamlined research teams and groups. One technical end-user application with OpenCL implementations is Matlab, and I am trying to use it myself. But I've already found that most implementations are still in development, so they haven't reached all their goal features and are highly technical to use. With standard Matlab knowledge, you could use OpenCL hardware to run some Matlab functions up to 100x faster than without it.--177.9.113.135 (talk) 21:41, 7 July 2011 (UTC)
Yes, OpenCL has been used in various fields. For example, there are several Bitcoin miners [1] Dneto.123 (talk) 20:30, 9 January 2014 (UTC)
Matlab toolbox
Besides the OpenCL toolbox cited in the article at OpenCL#Libraries, there is another Matlab implementation I would like to see added after it has been checked by article writers: MATLAB Image Processing Toolbox (IPT)--177.9.113.135 (talk) 21:41, 7 July 2011 (UTC)
POV: CUDA section
The section about CUDA overstresses low-level hardware tweaking and ignores portability issues. There are no sources for the conclusion, so I marked the section with {{POV-section}}. Rursus dixit. (mbork3!) 08:31, 4 May 2012 (UTC)
- Further remark: the two sources provided for the performance comparison between CUDA and OpenCL come to two conclusions:
- the first one claims 30% better performance for CUDA,
- the second one claims "comparable" performance, and that earlier comparisons were unfair.
- Both sources seem fair, and both treat only performance, nothing else. The conclusions of the section Comparison with CUDA are not well supported by those sources. Rursus dixit. (mbork3!) 08:36, 4 May 2012 (UTC)
- The comparisons never compare CUDA with OpenCL. They compare NVIDIA's implementation of CUDA to NVIDIA's implementation of OpenCL. NVIDIA is, unsurprisingly, quite slow at releasing OpenCL updates (it took until Christmas for them to release OpenCL 1.1). The languages are effectively the same, so I wouldn't expect one to perform significantly better than the other unless a particular implementation is inferior. IRWolfie- (talk) 18:29, 4 May 2012 (UTC)
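To illustrate that point, here is the same trivial SAXPY kernel written in CUDA C and in OpenCL C. This is only an illustrative sketch; the kernel name, indexing scheme and launch details are arbitrary and are not taken from the comparisons cited in the article.

// Illustrative sketch only: the same element-wise kernel in both dialects.

// CUDA C version -- the host would launch it with, e.g.,
//   saxpy<<<numBlocks, blockSize>>>(n, a, x, y);
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* global thread index */
    if (i < n) y[i] = a * x[i] + y[i];
}

// OpenCL C version -- the host builds it with clBuildProgram and
// launches it with clEnqueueNDRangeKernel.
__kernel void saxpy(int n, float a, __global const float *x, __global float *y) {
    int i = (int)get_global_id(0);                   /* global work-item index */
    if (i < n) y[i] = a * x[i] + y[i];
}

Apart from how the global index is obtained and how the host launches the kernel, the device code is essentially identical, so measured differences largely reflect the respective toolchains and drivers rather than the languages themselves.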
- Just a remark here that as important as portability is, I know that as a programmer I have a tendency to underestimate the importance of accessibility. At the end of the day, no matter how elegant or flexible your code is, if you do not have people packaging it up for non-programmers no one will be using it. As a younger physicist/programmer I know there are a tremendous number of older engineers/physicists who are definitely not programmers. They can only use what they know how to use, which quite often includes archaic FORTRAN or some MatLAB implementation. For them CUDA has the advantage that Nvidia has already had an army of developers come up with frontend solutions that fit straight into what they already know. I don't like that Nvidia did this instead of working within OpenCL but here we are. Portability vs. Accessibility.152.1.223.159 (talk) 18:09, 12 June 2013 (UTC)
- Dispute no longer seems active so I have tried to reflect the above discussion and have removed header. Apologies if I've missed the point. Servalo (talk) 09:02, 31 October 2013 (UTC)
Implementation
Hello,
I'd really like to know how OpenCL is actually implemented, not just the list of dates in which things were implemented.
Thanks.
- Anon— Preceding unsigned comment added by 131.179.212.102 (talk) 05:21, 9 June 2012 (UTC)
About implementation
- In fact, implementations can vary, and it is up to implementers how to do it. The standard only defines how things should work and what interfaces will be available, but anyone is free to implement it in any way they can imagine. There are many ways to achieve the same result, and more than one implementation exists; some implementations are quite different from others. The nice thing is that all implementations expose more or less the same interface, so a program does not have to know exactly how a given implementation realizes everything on the actual hardware.
- Basically, an OpenCL implementation does all the work required to translate abstract source code (a flavor of the C language) into native executable code that can run on the target device(s). So an implementation should have some kind of compiler for each supported architecture, able to take OpenCL kernel source as input, compile it into native executable code for the target platform, and then execute it.
- This approach lets programmers who write OpenCL computation kernels not care about which target their kernel will actually execute on or which instruction set it uses: it is up to the OpenCL implementation to handle this. The implementation has to build the OpenCL kernel into native code for the target platform, and it exposes a library function that does so. The implementer should also provide all required functions and features described in the OpenCL standard, so programs are presented with the standard OpenCL interface as described in the standard.
- So it usually amounts to some kind of compiler that builds OpenCL kernels from the provided source upon a function call, a runtime that implements all required features, and some glue that handles miscellaneous things like actually uploading the compiled code to the device, starting it, fetching the computation results, etc. (a minimal host-side sketch of this flow is appended after this comment).
- As for GPUs, a modern GPU can basically be viewed as a large array of SIMD-like ALUs. Exact implementation details differ across vendors, so AMD and Nvidia have quite different architectures: AMD favors larger numbers of simple processing elements, while Nvidia favors a smaller number of more complicated "processors". Both approaches have their own strong and weak points. The massively parallel architecture lets GPUs execute shaders at decent speeds, crunching huge amounts of data per clock cycle (much more than any ordinary CPU could hope to do), so while a GPU runs at 1 GHz or less, it can easily outrun a 4 GHz CPU on some code. GPUs were originally designed to render 3D scenes at decent speed, but since the processing units are SIMD ALUs, they have no trouble executing generic code that has nothing to do with 3D. You can be pretty sure that on tasks that can run in parallel, GPUs will beat traditional CPUs: a GPU has very fast RAM on wide buses and a large number of processing units, so if a task parallelizes well and does not require an awful amount of RAM (i.e. it fits in GPU RAM, or better yet in local caches), computation speed can skyrocket, sometimes 20 times faster or so. On the other hand, if an algorithm cannot run well in parallel, or requires extensive execution-flow control, it will not benefit from running on a GPU. Each SIMD element is quite weak on its own; it is the massively parallel execution that makes a GPU shine, so if an algorithm cannot run on many ALUs at the same time, there will be no huge gain. Also, GPUs are optimized for applying operations to huge amounts of data in more or less sequential ways, which is typical for 3D scenes; they usually have far fewer blocks to handle jumps and other changes of execution flow, so they are not great at such algorithms. So, basically, GPUs are a large number of relatively weak processors, and OpenCL offers a more or less standard, software-accessible interface for compiling and then running code on this kind of hardware as well. The code obviously runs on the same SIMD engine that runs shaders when rendering 3D scenes; in fact, graphics computations and OpenCL can usually mix, and a resource manager/arbiter allows them to coexist and run on the same ALUs.
- — Preceding unsigned comment added by 195.210.145.75 (talk) 11:42, 20 September 2012 (UTC)
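To make the above concrete, here is a minimal host-side sketch of that flow using the standard OpenCL 1.x C API. The kernel, buffer sizes and names are arbitrary examples and error handling is omitted; it is only meant to show where the implementation's compiler, runtime and "glue" are invoked, not to be a reference implementation.

/* Minimal sketch, assuming the standard OpenCL 1.x C API: the host hands
 * kernel source (a plain string) to the implementation, which compiles it
 * for whatever device is present, then enqueues it. Error checks omitted. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= k;"
    "}";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Here the implementation's compiler turns the abstract, C-like
     * source into native code for the target device. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    float data[1024];
    for (int i = 0; i < 1024; ++i) data[i] = (float)i;

    /* The "glue": upload data, set arguments, launch one work-item per
     * element (the data-parallel model described above), read back. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float k = 2.0f;
    clSetKernelArg(kern, 0, sizeof(buf), &buf);
    clSetKernelArg(kern, 1, sizeof(k), &k);

    size_t global = 1024;
    clEnqueueNDRangeKernel(queue, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[3] = %f\n", data[3]); /* expect 6.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(kern);
    clReleaseProgram(prog);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}

The same host code runs unchanged whether the device behind it is a GPU, a CPU or something else; only the implementation's compiler and runtime differ, which is the portability point made above.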
Unsure GPUs should be listed by date
I have a 2007-era GPU, and GPU Caps Viewer reports that it supports OpenCL 1.1, which was released in 2010. It does not appear that OpenCL support is strongly linked to when a GPU was released; therefore it seems unnecessary to state when an OEM produced their first OpenCL chips, unless it's qualified. — Preceding unsigned comment added by 209.162.33.89 (talk) 12:45, 25 March 2013 (UTC)
OpenCL for Windows XP
Since most OpenCL drivers are, for no good reason, very demanding of the OS, they don't work on Windows XP or on older CPUs, which makes developers not consider OpenCL. I've found on the Internet that:
- For old Intel processors (like the P4) or on-board GPUs (like the Intel G33/31): ATI Stream SDK 2.3 is the last version that supports Windows XP SP3
OCTAGRAM (talk) 17:14, 23 May 2013 (UTC)
CUDA comparison
[edit]This reference is used to support the sentence "Two comparisons [60][61] suggest that if the OpenCL implementation is correctly tweaked to suit the GPU architecture they perform similarly". However, the reference says:
For all problem sizes, both the kernel and the end-to-end times show considerable difference in favor of CUDA. The OpenCL kernel’s performance is between about 13 % and 63% slower, and the end-to-end time is between about 16% and 67% slower. [...]
In our tests, CUDA performed better when transferring data to and from the GPU. We did not see any considerable change in OpenCL’s relative data transfer performance as more data were transferred. CUDA’s kernel execution was also consistently faster than OpenCL’s, despite the two implementations running nearly identical code. CUDA seems to be a better choice for applications where achieving as high a performance as possible is important.
In my opinion, this reference does not support the sentence. Sancho 22:58, 12 December 2013 (UTC)
- I agreed with you and tried to make the summary more accurate. — brighterorange (talk) 16:07, 27 December 2013 (UTC)
- The paper doesn't include any real code and doesn't mention the operating system, drivers, or OpenCL/CUDA versions used. The paper also does not mention its date of publication (it cites papers from 2010, so it must be from at least 2010) -- all that is known about the test is that it was done using NVIDIA tools and that the data are based on the execution of some test on a GeForce 260. While the conclusions might have been true when the paper was written, it is not possible, from the paper alone, to tell whether they are relevant to anyone today -- if ever. I would ignore the paper entirely.
First sentence in wiki is misleading and confusing
It seems to me that the first sentence of the article is misleading and confusing, as it seems to imply that FPGAs are processors ("... (FPGAs) and other processors"), which they are not. An FPGA is (basically) a digital-hardware blank canvas within which a processor, as well as just about any other digital hardware item that is not a processor, can be realized. Furthermore, it is unclear to me from reading this article whether OpenCL supports (or is envisioned to support at some time in the future) hardware programming of FPGAs (that is, what one would normally do by way of Verilog and/or VHDL). Given this, I propose the following candidates as a replacement for the article's first sentence:
Candidate A: Open Computing Language (OpenCL) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), processors realized on field-programmable gate arrays (FPGAs), and other processors.
OR
Candidate B: Open Computing Language (OpenCL) is a framework (1) for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), processors realized on field-programmable gate arrays (FPGAs), and other processors; as well as (2) for describing hardware to be realized in programmable logic devices such as FPGAs.
I am betting that Candidate A is most correct. Anyone else care to comment?
Thanks for reading, Bill Bartol
BillyBarty68 (talk) 19:29, 4 February 2015 (UTC)
Since there is no support (yet) for programming FPGAs using OpenCL, I would get rid of it entirely. OpenCL is only meant to be executed on processors -- any FPGA with a processor realized on it will fit this description, but even then it still requires some support from the operating system used. — Preceding unsigned comment added by 212.130.79.38 (talk) 15:59, 5 May 2015 (UTC)
- Both Altera [1] and Xilinx [2] now support OpenCL for their FPGAs. Nubicles (talk) 22:29, 27 May 2015 (UTC)
References
Storing text from lead
Cut because it was redundant or introduced new material. May be reincorporated into the body later on.
OpenCL can be used to give an application access to a graphics processing unit for non-graphical computing. Academic researchers have investigated automatically compiling OpenCL programs into application-specific processors running on FPGAs,[1] and commercial FPGA vendors are developing tools to translate OpenCL to run on their FPGA devices.[2][3] OpenCL can also be used as an intermediate language for directives-based programming such as OpenACC.[4][5][6] Sizeofint (talk) 18:45, 8 April 2015 (UTC)
References
- ^ Jääskeläinen, Pekka O.; de La Lama, Carlos S.; Huerta, Pablo; Takala, Jarmo H. (July 2010). "OpenCL-based design methodology for application-specific processors". 2010 International Conference on Embedded Computer Systems (SAMOS). IEEE: 223–230. doi:10.1109/ICSAMOS.2010.5642061. ISBN 978-1-4244-7936-8. Retrieved February 17, 2011.
- ^ Altera OpenCL
- ^ "Jobs at Altera". Archived from the original on July 21, 2011.
- ^ "Caps Raises The Case For Hybrid Multicore Parallel Programming". Dr. Dobb's. 17 June 2012. Retrieved 17 January 2014.
- ^ "Does the OpenACC API run on top of OpenCL?". OpenACC.org. Retrieved 17 January 2014.
- ^ Reyes, Ruymán; López-Rodríguez, Iván; Fumero, Juan J.; de Sande, Francisco (27–31 August 2012). accULL: An OpenACC Implementation with CUDA and OpenCL Support. EURO-PAR 2012 International European Conference on Parallel and Distributed Computing. doi:10.1007/978-3-642-32820-6_86. Retrieved 17 January 2014.
External links modified
Hello fellow Wikipedians,
I have just added archive links to 3 external links on OpenCL. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20081218113648/http://www.hpcwire.com:80/topic/applications/RapidMind_Embraces_Open_Source_and_Standards_Projects.html to http://www.hpcwire.com/topic/applications/RapidMind_Embraces_Open_Source_and_Standards_Projects.html
- Added archive https://web.archive.org/20110804010819/http://developer.amd.com:80/documentation/articles/pages/OpenCL-and-the-AMD-APP-SDK.aspx to http://developer.amd.com/documentation/articles/pages/OpenCL-and-the-AMD-APP-SDK.aspx
- Added archive https://web.archive.org/20140116074408/http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6633603 to http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6633603
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner · Online) 05:29, 24 January 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to 6 external links on OpenCL. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20110516092008/http://developer.amd.com:80/zones/OpenCLZone/courses/Documents/Introduction_to_OpenCL_Programming%20(201005).pdf to http://developer.amd.com/zones/OpenCLZone/courses/Documents/Introduction_to_OpenCL_Programming%20(201005).pdf
- Added archive https://web.archive.org/20090405072046/http://www.pcper.com:80/comments.php?nid=6954 to http://www.pcper.com/comments.php?nid=6954
- Added archive https://web.archive.org/20090809065559/http://developer.amd.com:80/GPU/ATISTREAMSDKBETAPROGRAM/Pages/default.aspx to http://developer.amd.com/GPU/ATISTREAMSDKBETAPROGRAM/Pages/default.aspx#one
- Added archive https://web.archive.org/20091202065250/http://www.s3graphics.com:80/en/news/news_detail.aspx?id=44 to http://www.s3graphics.com/en/news/news_detail.aspx?id=44
- Added archive https://web.archive.org/20110717054302/http://software.intel.com:80/en-us/articles/opencl-release-notes/ to http://software.intel.com/en-us/articles/opencl-release-notes/
- Added archive https://web.archive.org/20110906045531/http://developer.amd.com:80/documentation/articles/pages/OpenCL-and-the-AMD-APP-SDK.aspx to http://developer.amd.com/documentation/articles/pages/OpenCL-and-the-AMD-APP-SDK.aspx
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner · Online) 23:50, 23 February 2016 (UTC)
Status 2017 Tom Stellard on XDC
[edit]See
https://www.x.org/wiki/Events/XDC2017/Stellard_GPGPU.pdf — Preceding unsigned comment added by 95.90.228.26 (talk) 16:28, 21 March 2018 (UTC)
POCL 1.1 available with some improvements
[edit]See
http://portablecl.org/pocl-1.1.html — Preceding unsigned comment added by 2A02:810B:C53F:B9E8:D54B:B274:9D38:538C (talk) 19:01, 21 March 2018 (UTC)
Open Source implementation - confusing wording; and Mesa status possibly outdated?
First it said the implementation was "formerly called CLOVER", citing a 2013 source, but then why do the 2018 sources still call it "Clover"? I feel like some sort of elaboration is required there.
And then the wording: things like "actual 1.1 incomplete, mostly done AMD Radeon GCN" are not really proper English; it reads more like a note than anything else. The section also fails to mention the contribution of Collabora and Microsoft, along with the support for OpenCL 1.2 in the latest version. — Preceding unsigned comment added by Hch12907 (talk • contribs) 07:44, 9 October 2020 (UTC)
3.0.11 available with bugfixes since 6 May 2022
[edit]See
https://www.khronos.org/registry/OpenCL/ — Preceding unsigned comment added by 2A02:810B:4C3F:FE7C:B83B:3A54:8247:EB38 (talk) 09:17, 18 May 2022 (UTC)