Open-source artificial intelligence
Open-source artificial intelligence is an AI system that is freely available to use, study, modify, and share.[1] These attributes extend to each of the system’s components, including datasets, code, and model parameters, promoting a collaborative and transparent approach to AI development.[1]
Free and Open-Source Software (FOSS) licenses, such as the Apache License, MIT License, and GNU General Public License, outline the terms under which open-source artificial intelligence can be accessed, modified, and redistributed.[2]
The open-source model provides widespread access to new AI technologies, allowing individuals and organizations of all sizes to participate in AI research and development.[3][4] This approach supports collaboration and allows for shared advancements within the field of artificial intelligence.[3][4]
In contrast, closed-source artificial intelligence is proprietary, restricting access to the source code and internal components.[3] Only the owning company or organization can modify or distribute a closed-source artificial intelligence system, prioritizing control and protection of intellectual property over external contributions and transparency.[3][5][6]
Companies often develop closed products in an attempt to keep a competitive advantage in the marketplace.[6] However, some experts suggest that open-source AI tools may have a development advantage over closed-source products and have the potential to overtake them in the marketplace.[6][4]
Popular open-source artificial intelligence project categories include large language models, machine translation tools, and chatbots.[7]
Software developers producing open-source artificial intelligence resources must also trust the various other open-source software components on which those resources are built.[8][9]
Open-source artificial intelligence has been speculated to carry greater risk than closed-source artificial intelligence, since bad actors can strip the safety protocols from publicly released models at will.[4]
History
While artificial intelligence was developed through public research for most of its history, the need to distinguish open-source from closed-source AI emerged when industry came to dominate AI research.
OpenAI was founded as a non-profit seeking to advance AI research while promoting safety by making its research available to the public.[10] In 2019, the company set up a "capped profit" structure to start receiving investments.[10] According to the company, the source code of the subsequently released GPT-2 was initially kept private, citing the greater risks posed by increasingly powerful models.[10] After facing public backlash, however, OpenAI released the source code to GitHub three months later.[10]
The LF AI & Data Foundation, originally established as the LF Deep Learning Foundation in March 2018, expanded its focus in May 2019 to include various AI subfields, prompting a rebranding to the LF AI Foundation.[11] In October 2020, it merged with ODPi, an organization dedicated to advancing a big data software ecosystem, leading to its current name, the LF AI & Data Foundation, to emphasize the critical role of data in AI research and development.[11]
As of October 2024, the foundation comprised 77 member companies from North America, Europe, and Asia, and hosted 67 open-source software (OSS) projects contributed by a diverse array of organizations, including Silicon Valley giants such as Nvidia, Amazon, Intel, and Microsoft.[12] Other large companies such as Alibaba, TikTok, AT&T, and IBM have also contributed.[12] Research organizations such as NYU, the University of Michigan AI labs, Columbia University, and Penn State are associate members of the LF AI & Data Foundation.[12]
In September 2022, the PyTorch Foundation was established to oversee the widely-used PyTorch deep learning framework, which was donated by Meta.[13] The foundation's mission is to drive the adoption of AI tools by fostering and sustaining an ecosystem of open-source, vendor-neutral projects integrated with PyTorch, and to democratize access to state-of-the-art tools, libraries, and other components, making these innovations accessible to everyone.[14]
The PyTorch Foundation also separates business and technical governance: the PyTorch project retains its technical governance structure, while the foundation handles funding, hosting expenses, events, and assets such as the project's website, GitHub repository, and social media accounts, ensuring open community governance.[14] Upon its inception, the foundation formed a governing board comprising representatives from its initial members: AMD, Amazon Web Services, Google Cloud, Hugging Face, IBM, Intel, Meta, Microsoft, and NVIDIA.[14]
In 2024, Meta released a collection of large AI models, including Llama 3.1 405B, comparable to the most advanced closed-source models.[15] The company claimed its approach to AI would be open-source, setting it apart from other major tech companies.[15] Meta has nonetheless been criticized for not being truly open-source.[16]
Applications
Machine learning
Open-source artificial intelligence has brought widespread accessibility to machine learning (ML) tools, enabling developers to implement and experiment with ML models across various industries. scikit-learn, TensorFlow, and PyTorch are three of the most widely used open-source ML libraries, each contributing unique capabilities to the field.[17] scikit-learn is known for its robust toolkit, offering accessible functions for classification, regression, clustering, and dimensionality reduction.[18] The library simplifies the ML pipeline from data preprocessing to model evaluation, making it suitable for users at all levels of expertise.[18] TensorFlow, initially developed by Google, supports large-scale ML models, especially in production environments requiring scalability, such as healthcare, finance, and retail.[19] PyTorch, favored for its flexibility and ease of use, has been particularly popular in research and academia, supporting everything from basic ML models to advanced deep learning applications, and is now widely used in industry as well.[20]
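The preprocessing-to-evaluation pipeline described above can be sketched in a few lines of scikit-learn; the dataset and model choice here are illustrative assumptions, not drawn from the cited sources.

```python
# Minimal scikit-learn pipeline sketch (hypothetical example): scale
# features, fit a classifier, and evaluate on a held-out split.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One pipeline object covers preprocessing and modeling together.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
clf.fit(X_train, y_train)
accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {accuracy:.2f}")
```

Because the scaler and classifier live in one pipeline, the same preprocessing is applied consistently at training and prediction time.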
Natural Language Processing
Large language models
Open-source AI has played a crucial role in the development and adoption of large language models (LLMs), transforming text generation and comprehension capabilities. While proprietary models like OpenAI's GPT series have redefined what is possible in applications such as interactive dialogue systems and automated content creation, fully open-source models have also made significant strides. Google's BERT, for instance, is an open-source model widely used for tasks like entity recognition and language translation, establishing itself as a versatile tool in NLP.[21] Meta's LLaMA (Large Language Model Meta AI) represents another breakthrough in open-source LLMs, offering a platform specifically designed to support research and domain-specific applications.[22] These open-source LLMs have democratized access to advanced language technologies, enabling developers to create applications such as personalized assistants, legal document analysis, and educational tools without relying on proprietary systems.[23]
LLaMA
LLaMA is a family of large language models released by Meta AI starting in February 2023.[24] Meta claims these models are open-source software, but the Open Source Initiative disputes this claim, arguing that "Meta's license for the LLaMa models and code does not meet this standard; specifically, it puts restrictions on commercial use for some users (paragraph 2) and also restricts the use of the model and software for certain purposes (the Acceptable Use Policy)."[25]
| Model | Developer | Parameter count | Context window | Licensing | Ref. |
|---|---|---|---|---|---|
| LLaMA | Meta AI | 7B, 13B, 33B, 65B | 2048 | | [24] |
| Llama 2 | Meta AI | 7B, 13B, 70B | 4k | Custom Meta license | [26][27] |
| Llama 3.1 | Meta AI | 8B, 70B, 405B | 128K | Meta Llama 3 Community License | [28][29][30] |
| Llama 3.2 | Meta AI | 1B to 405B | | Research-only | [31] |
| Mistral 7B | Mistral AI | 7B | 8k | Apache 2.0 | [32][33] |
| Mixtral 8x22B | Mistral AI | 8×22B | | Apache 2.0 | [31] |
| GPT-J | EleutherAI | 6B | 2048 | Apache 2.0 | [34] |
| GPT-NeoX | EleutherAI | 20B | | MIT License | [31] |
| Pythia | EleutherAI | 70M to 12B | | Apache 2.0 (Pythia-6.9B only) | [35][36] |
| T5 | Google AI | 60M to 11B | | Apache 2.0 | [31] |
| Gemma 2 | Google DeepMind | 2B, 9B, 27B | | Apache 2.0 | [31] |
| OLMo | Allen Institute for AI | Various | | Apache 2.0 | [31] |
| BLOOM | BigScience | 176B | | OpenRAIL-M | [31] |
| StarCoder2 | BigCode | Various | | Apache 2.0 | [31] |
| Falcon | Technology Innovation Institute | 7B, 40B | | Apache 2.0 | [31] |
| Jamba Series | AI21 Labs | Mini to Large | | Custom | [31] |
| Sea-Lion | AI Singapore | 7B | | Custom | [31] |
| Qwen Series | Alibaba Group | 7B | | Custom | [31] |
| Dolly 2.0 | Databricks | 12B | | CC BY-SA 3.0 | [31] |
| Granite Series | IBM | 3B, 8B | | Apache 2.0 | [31] |
| Phi-3 Series | Microsoft | Mini to Medium | | MIT License | [31] |
| NVLM 1.0 Family | Nvidia | 72B | | CC BY-SA 3.0 | [31] |
| RakutenAI Series | Rakuten | 7B | | Custom | [31] |
| Grok-1 | xAI | 314B | | Apache 2.0 | [31] |
Machine Translation
Open-source machine translation models have paved the way for multilingual support in applications across industries. Hugging Face's MarianMT is a prominent example, supporting a wide range of language pairs and becoming a valuable tool for translation and global communication.[37] Another notable model, OpenNMT, offers a comprehensive toolkit for building high-quality, customized translation models, which are used in both academic research and industry.[38] Alongside these open-source models, open-source datasets such as the WMT (Workshop on Machine Translation) datasets, the Europarl Corpus, and OPUS have played a critical role in advancing machine translation technology.[39][40] These datasets provide diverse, high-quality parallel text corpora that enable developers to train and fine-tune models for specific languages and domains.[39]
Text-to-image models
| Model | Developer | Parameter count | Licensing | Ref. |
|---|---|---|---|---|
| Stable Diffusion 3.5 | Stability AI | 2.5B to 8B | OpenRAIL-M | [31] |
| IF | DeepFloyd | 400M to 4.3B | Custom | [31] |
Computer vision models
Open-source AI has led to considerable advances in the field of computer vision, with libraries such as OpenCV (Open Computer Vision Library) playing a pivotal role in the democratization of powerful image processing and recognition capabilities.[41] OpenCV provides a comprehensive set of functions that can support real-time computer vision applications, such as image recognition, motion tracking, and facial detection.[42] Originally developed by Intel, OpenCV has become one of the most popular libraries for computer vision due to its versatility and extensive community support.[41][42] The library includes a range of pre-trained models and utilities for handling common tasks, making OpenCV a valuable resource for beginners and experts alike. Beyond OpenCV, other open-source computer vision models like YOLO (You Only Look Once) and Detectron2 offer specialized frameworks for object detection, classification, and segmentation, contributing to advancements in applications like security, autonomous vehicles, and medical imaging.[43][44]
Unlike previous generations of computer vision models, which process image data through convolutional layers, newer models known as Vision Transformers (ViTs) rely on attention mechanisms similar to those found in natural language processing.[45] ViT models break down an image into smaller patches and apply self-attention to identify which areas of the image are most relevant, effectively capturing long-range dependencies within the data.[45] This shift from convolutional operations to attention mechanisms enables ViT models to achieve state-of-the-art accuracy in image classification and other tasks, pushing the boundaries of computer vision applications.[46]
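The patch-and-attend mechanism described above can be sketched in plain NumPy; the image size, patch size, and identity query/key/value projections below are simplifying assumptions for illustration, not the actual ViT architecture.

```python
# Sketch of the ViT tokenization step plus single-head self-attention,
# with identity projections for brevity (an illustrative simplification).
import numpy as np

def image_to_patches(img, patch=4):
    # img: (H, W, C). Split into non-overlapping patch x patch tiles and
    # flatten each tile into one vector ("token"), as ViT does.
    h, w, c = img.shape
    tiles = img.reshape(h // patch, patch, w // patch, patch, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

def self_attention(tokens):
    # Every patch attends to every other patch in one step, which is how
    # attention captures long-range dependencies across the image.
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over rows
    return weights @ tokens

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16, 3))
patches = image_to_patches(img)   # 16 patches, each 4*4*3 = 48 values
out = self_attention(patches)     # same shape: one updated vector per patch
print(patches.shape, out.shape)
```

A real ViT adds learned projections, multiple heads, positional embeddings, and stacked layers on top of this skeleton.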
| Model | Developer | Parameter count | Licensing | Ref. |
|---|---|---|---|---|
| SAM 2.1 | Meta | 38.9M to 224.4M | Apache 2.0 | [31] |
| DeepLab | Google | Not disclosed | Apache 2.0 | [31] |
| Florence | Microsoft | 0.23B, 0.77B | MIT License | [31] |
| CLIP | OpenAI | 400M | MIT License | [31] |
Robotics
Open-source artificial intelligence has made a notable impact in robotics by providing a flexible, scalable development environment for both academia and industry.[47] The Robot Operating System (ROS) stands out as a leading open-source framework, offering tools, libraries, and standards essential for building robotics applications.[48] ROS simplifies the development process, allowing developers to work across different hardware platforms and robotic architectures.[47] Furthermore, Gazebo, an open-source robotic simulation software often paired with ROS, enables developers to test and refine their robotic systems in a virtual environment before real-world deployment.[49]
Healthcare
In the healthcare industry, open-source AI has revolutionized diagnostics, patient care, and personalized treatment options.[50] Open-source libraries like TensorFlow and PyTorch have been applied extensively in medical imaging for tasks such as tumor detection, improving the speed and accuracy of diagnostic processes.[51][50] Additionally, OpenChem, an open-source library specifically geared toward chemistry and biology applications, enables the development of predictive models for drug discovery, helping researchers identify potential compounds for treatment.[52] NLP models, adapted for analyzing electronic health records (EHRs), have also become instrumental in healthcare.[53] By summarizing patient data, detecting patterns, and flagging potential issues, open-source AI has enhanced clinical decision-making and improved patient outcomes, demonstrating the transformative power of AI in medicine.[53]
Military
Open-source AI has become a critical component in military applications, highlighting both its potential and its risks. Meta's Llama models, initially restricted from military use, were adopted by U.S. defense contractors like Lockheed Martin and Oracle after unauthorized adaptations by Chinese researchers affiliated with the People's Liberation Army (PLA) came to light.[54][55] Chinese researchers used an earlier version of Llama to develop tools like ChatBIT, optimized for military intelligence and decision-making, prompting Meta to expand its partnerships with U.S. contractors to ensure the technology could be used strategically for national security.[55] These applications now include logistics, maintenance, and cybersecurity enhancements.[55]
Concerns
Open-source development of AI has been criticized by researchers for quality and security concerns that go beyond general concerns about AI safety.
Current open-source models underperform closed-source models on most tasks, but they are improving more quickly and closing the gap.[56]
Researchers have identified concrete security and ethical problems in open-source artificial intelligence. An analysis of over 100,000 open-source models on Hugging Face and GitHub using code-vulnerability scanners such as Bandit, FlawFinder, and Semgrep found that over 30% of the models contained high-severity vulnerabilities.[57] Closed models also typically carry fewer safety risks than open-sourced models.[4] The freedom to modify open-source models has led developers to release models without ethical guardrails, such as GPT-4chan.[4]
Open-source release of models also carries theoretical risks. Once a model is public, it cannot be rolled back or patched if serious security issues are discovered.[4] For example, open-source AI could allow bioterrorism groups such as Aum Shinrikyo to remove fine-tuning and other safeguards from AI models and obtain assistance in devising more devastating attacks.[58] In practice, however, the main barrier to developing real-world terrorist schemes remains the stringent restrictions on the necessary materials and equipment.[4] Moreover, the rapid pace of AI advancement makes it less appealing to misuse older models, which are more vulnerable to attack but also less capable.[4]
In July 2024, the United States released a presidential report saying it did not find sufficient evidence to restrict revealing model weights.[59]
References
- ^ a b "The Open Source AI Definition – 1.0". Open Source Initiative. Retrieved 2024-11-14.
- ^ "Licenses". Open Source Initiative. Retrieved 2024-11-14.
- ^ a b c d Hassri, Myftahuddin Hazmi; Man, Mustafa (2023-12-07). "The Impact of Open-Source Software on Artificial Intelligence". Journal of Mathematical Sciences and Informatics. 3 (2). doi:10.46754/jmsi.2023.12.006. ISSN 2948-3697.
- ^ a b c d e f g h i Eiras, Francisco; Petrov, Aleksandar; Vidgen, Bertie; Schroeder, Christian; Pizzati, Fabio; Elkins, Katherine; Mukhopadhyay, Supratik; Bibi, Adel; Purewal, Aaron (2024-05-29), Risks and Opportunities of Open-Source Generative AI, arXiv:2405.08597
- ^ Isaac, Mike (2024-05-29). "What to Know About the Open Versus Closed Software Debate". The New York Times. Retrieved 2024-11-13.
- ^ a b c Solaiman, Irene (May 24, 2023). "Generative AI Systems Aren't Just Open or Closed Source". Wired.
- ^ Castelvecchi, Davide (29 June 2023). "Open-source AI chatbots are booming — what does this mean for researchers?". Nature. 618 (7967): 891–892. Bibcode:2023Natur.618..891C. doi:10.1038/d41586-023-01970-6. PMID 37340135.
- ^ Thummadi, Babu Veeresh (2021). "Artificial Intelligence (AI) Capabilities, Trust and Open Source Software Team Performance". In Denis Dennehy; Anastasia Griva; Nancy Pouloudi; Yogesh K. Dwivedi; Ilias Pappas; Matti Mäntymäki (eds.). Responsible AI and Analytics for an Ethical and Inclusive Digitized Society. 20th International Federation of Information Processing WG 6.11 Conference on e-Business, e-Services and e-Society, Galway, Ireland, September 1–3, 2021. Lecture Notes in Computer Science. Vol. 12896. Springer. pp. 629–640. doi:10.1007/978-3-030-85447-8_52. ISBN 978-3-030-85446-1.
- ^ Mitchell, James (2023-10-22). "How to Create Artificial intelligence Software". AI Software Developers. Retrieved 2024-03-31.
- ^ a b c d Xiang, Chloe (2023-02-28). "OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit". VICE. Retrieved 2024-11-14.
- ^ a b "New AI & Data Foundation Combines Industry's Fastest-Growing Open Source Developments in Artificial Intelligence and Open Data - Linux Foundation". www.linuxfoundation.org. Retrieved 2024-11-14.
- ^ a b c "LF AI & Data Landscape". LF AI & Data Landscape. Retrieved 2024-11-14.
- ^ "Announcing the PyTorch Foundation to Accelerate Progress in AI Research". Meta. 2022-09-12. Retrieved 2024-11-14.
- ^ a b c "PyTorch Foundation". PyTorch. Retrieved 2024-11-14.
- ^ a b Mirjalili, Seyedali (2024-08-01). "Meta just launched the largest 'open' AI model in history. Here's why it matters". The Conversation. Retrieved 2024-11-14.
- ^ Waters, Richard (2024-10-17). "Meta under fire for 'polluting' open-source". Financial Times. Retrieved 2024-11-14.
- ^ Dilhara, Malinda; Ketkar, Ameya; Dig, Danny (2021-07-23). "Understanding Software-2.0: A Study of Machine Learning Library Usage and Evolution". ACM Trans. Softw. Eng. Methodol. 30 (4): 55:1–55:42. doi:10.1145/3453478. ISSN 1049-331X.
- ^ a b Pedregosa, Fabian; Varoquaux, Gaël; Gramfort, Alexandre; Michel, Vincent; Thirion, Bertrand; Grisel, Olivier; Blondel, Mathieu; Prettenhofer, Peter; Weiss, Ron; Dubourg, Vincent; Vanderplas, Jake; Passos, Alexandre; Cournapeau, David; Brucher, Matthieu; Perrot, Matthieu (2011). "Scikit-learn: Machine Learning in Python". Journal of Machine Learning Research. 12 (85): 2825–2830. arXiv:1201.0490. Bibcode:2011JMLR...12.2825P. ISSN 1533-7928.
- ^ Abadi, Martín (2016-09-04). "TensorFlow: Learning functions at scale". Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming. ICFP 2016. New York, NY, USA: Association for Computing Machinery. p. 1. doi:10.1145/2951913.2976746. ISBN 978-1-4503-4219-3.
- ^ Paszke, Adam; Gross, Sam; Massa, Francisco; Lerer, Adam; Bradbury, James; Chanan, Gregory; Killeen, Trevor; Lin, Zeming; Gimelshein, Natalia (2019-12-08), "PyTorch: an imperative style, high-performance deep learning library", Proceedings of the 33rd International Conference on Neural Information Processing Systems, Red Hook, NY, USA: Curran Associates Inc., pp. 8026–8037, arXiv:1912.01703, retrieved 2024-11-15
- ^ Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2019-05-24), BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, arXiv:1810.04805
- ^ Touvron, Hugo; Lavril, Thibaut; Izacard, Gautier; Martinet, Xavier; Lachaux, Marie-Anne; Lacroix, Timothée; Rozière, Baptiste; Goyal, Naman; Hambro, Eric (2023-02-27), LLaMA: Open and Efficient Foundation Language Models, arXiv:2302.13971
- ^ Chang, Yupeng; Wang, Xu; Wang, Jindong; Wu, Yuan; Yang, Linyi; Zhu, Kaijie; Chen, Hao; Yi, Xiaoyuan; Wang, Cunxiang; Wang, Yidong; Ye, Wei; Zhang, Yue; Chang, Yi; Yu, Philip S.; Yang, Qiang (2024-03-29). "A Survey on Evaluation of Large Language Models". ACM Trans. Intell. Syst. Technol. 15 (3): 39:1–39:45. arXiv:2307.03109. doi:10.1145/3641289. ISSN 2157-6904.
- ^ a b "Introducing LLaMA: A foundational, 65-billion-parameter language model". 2023-09-11. Archived from the original on 2023-09-11. Retrieved 2023-10-03.
- ^ "Meta's LLaMa 2 license is not Open Source". 20 July 2023.
- ^ "meta-llama/Llama-2-70b-chat-hf · Hugging Face". huggingface.co. Retrieved 2023-10-03.
- ^ "Llama 2 - Meta AI". ai.meta.com. Retrieved 2023-10-03.
- ^ "Meet Llama 3.1". Llama Meta. 2024-09-09. Retrieved 2024-09-09.
- ^ "Introducing Llama 3.1: Our most capable models to date". Meta. July 23, 2024.
- ^ "llama3/LICENSE at main · meta-llama/llama3". GitHub. Retrieved 2024-09-09.
- ^ a b c d e f g h i j k l m n o p q r s t u v w x Perlow, Jason (2024-11-06). "The best open-source AI models: All your free-to-use options explained". ZDNET. Archived from the original on 13 November 2024. Retrieved 2024-11-13.
- ^ "mistralai/Mistral-7B-v0.1 · Hugging Face". huggingface.co. Retrieved 2023-10-03.
- ^ AI, Mistral (2023-09-27). "Mistral 7B". mistral.ai. Retrieved 2023-10-03.
- ^ "EleutherAI/gpt-j-6b · Hugging Face". huggingface.co. 2023-05-03. Retrieved 2023-10-03.
- ^ Biderman, Stella; Schoelkopf, Hailey; Anthony, Quentin; Bradley, Herbie; O'Brien, Kyle; Hallahan, Eric; Mohammad Aflah Khan; Purohit, Shivanshu; USVSN Sai Prashanth; Raff, Edward; Skowron, Aviya; Sutawika, Lintang; Oskar van der Wal (2023-10-03). "Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling". arXiv:2304.01373 [cs.CL].
- ^ "EleutherAI/pythia-6.9b · Hugging Face". huggingface.co. 2023-05-03. Retrieved 2023-10-03.
- ^ Junczys-Dowmunt, Marcin; Grundkiewicz, Roman; Dwojak, Tomasz; Hoang, Hieu; Heafield, Kenneth; Neckermann, Tom; Seide, Frank; Germann, Ulrich; Aji, Alham Fikri (2018-04-04), Marian: Fast Neural Machine Translation in C++, arXiv:1804.00344
- ^ Klein, Guillaume; Kim, Yoon; Deng, Yuntian; Senellart, Jean; Rush, Alexander M. (2017-03-06), OpenNMT: Open-Source Toolkit for Neural Machine Translation, arXiv:1701.02810
- ^ a b Aulamo, Mikko; Tiedemann, Jörg (September 2019). Hartmann, Mareike; Plank, Barbara (eds.). "The OPUS Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services". Proceedings of the 22nd Nordic Conference on Computational Linguistics. Turku, Finland: Linköping University Electronic Press: 389–394.
- ^ Koehn, Philipp (2005-09-13). "Europarl: A Parallel Corpus for Statistical Machine Translation". Proceedings of Machine Translation Summit X: Papers. Phuket, Thailand: 79–86.
- ^ a b Pulli, Kari; Baksheev, Anatoly; Kornyakov, Kirill; Eruhimov, Victor (June 2012). "Real-time computer vision with OpenCV". Communications of the ACM. 55 (6): 61–69. doi:10.1145/2184319.2184337. ISSN 0001-0782 – via ACM.
- ^ a b Culjak, Ivan; Abram, David; Pribanic, Tomislav; Dzapo, Hrvoje; Cifrek, Mario (21–25 May 2012). "A brief introduction to OpenCV". Proceedings of the 35th International Convention MIPRO – via IEEE.
- ^ Redmon, Joseph; Divvala, Santosh; Girshick, Ross; Farhadi, Ali (2016-05-09), You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640
- ^ facebookresearch/detectron2, Meta Research, 2024-11-16, retrieved 2024-11-16
- ^ a b Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg (2021-06-03), An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, arXiv:2010.11929
- ^ Khan, Salman; Naseer, Muzammal; Hayat, Munawar; Zamir, Syed Waqas; Khan, Fahad Shahbaz; Shah, Mubarak (2022-01-31). "Transformers in Vision: A Survey". ACM Computing Surveys. 54 (10s): 1–41. arXiv:2101.01169. doi:10.1145/3505244. ISSN 0360-0300.
- ^ a b Macenski, Steve; Foote, Tully; Gerkey, Brian; Lalancette, Chris; Woodall, William (2022-05-25). "Robot Operating System 2: Design, Architecture, and Uses In The Wild". Science Robotics. 7 (66): eabm6074. arXiv:2211.07752. doi:10.1126/scirobotics.abm6074. ISSN 2470-9476. PMID 35544605.
- ^ Quigley, M. (2009). "ROS: an open-source Robot Operating System". Proc. Open-Source Software Workshop of the Int'l. Conf. On Robotics and Automation (ICRA), 2009.
- ^ Koenig, N.; Howard, A. (2004). "Design and use paradigms for gazebo, an open-source multi-robot simulator". 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566). Vol. 3. IEEE. pp. 2149–2154. doi:10.1109/iros.2004.1389727. ISBN 0-7803-8463-6.
- ^ a b Esteva, Andre; Robicquet, Alexandre; Ramsundar, Bharath; Kuleshov, Volodymyr; DePristo, Mark; Chou, Katherine; Cui, Claire; Corrado, Greg; Thrun, Sebastian; Dean, Jeff (January 2019). "A guide to deep learning in healthcare". Nature Medicine. 25 (1): 24–29. doi:10.1038/s41591-018-0316-z. ISSN 1546-170X. PMID 30617335.
- ^ Ashraf, Mudasir; Ahmad, Syed Mudasir; Ganai, Nazir Ahmad; Shah, Riaz Ahmad; Zaman, Majid; Khan, Sameer Ahmad; Shah, Aftab Aalam (2021). "Prediction of Cardiovascular Disease Through Cutting-Edge Deep Learning Technologies: An Empirical Study Based on TENSORFLOW, PYTORCH and KERAS". In Gupta, Deepak; Khanna, Ashish; Bhattacharyya, Siddhartha; Hassanien, Aboul Ella; Anand, Sameer; Jaiswal, Ajay (eds.). International Conference on Innovative Computing and Communications. Advances in Intelligent Systems and Computing. Vol. 1165. Singapore: Springer. pp. 239–255. doi:10.1007/978-981-15-5113-0_18. ISBN 978-981-15-5113-0.
- ^ Korshunova, Maria; Ginsburg, Boris; Tropsha, Alexander; Isayev, Olexandr (2021-01-25). "OpenChem: A Deep Learning Toolkit for Computational Chemistry and Drug Design". Journal of Chemical Information and Modeling. 61 (1): 7–13. doi:10.1021/acs.jcim.0c00971. ISSN 1549-9596. PMID 33393291.
- ^ a b Juhn, Young; Liu, Hongfang (2020-02-01). "Artificial intelligence approaches using natural language processing to advance EHR-based clinical research". Journal of Allergy and Clinical Immunology. 145 (2): 463–469. doi:10.1016/j.jaci.2019.12.897. ISSN 0091-6749. PMC 7771189. PMID 31883846.
- ^ Pomfret, James; Pang, Jessie (2024-11-01). "Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama". Reuters. Retrieved 2024-11-16.
- ^ a b c Roth, Emma (2024-11-04). "Meta AI is ready for war". The Verge. Retrieved 2024-11-16.
- ^ Chen, Hailin; Jiao, Fangkai; Li, Xingxuan; Qin, Chengwei; Ravaut, Mathieu; Zhao, Ruochen; Xiong, Caiming; Joty, Shafiq (2024-01-15), ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching up?, arXiv:2311.16989
- ^ Kathikar, Adhishree; Nair, Aishwarya; Lazarine, Ben (2023). "Assessing the Vulnerabilities of the Open-Source Artificial Intelligence (AI) Landscape: A Large-Scale Analysis of the Hugging Face Platform". 2023 IEEE International Conference on Intelligence and Security Informatics (ISI). pp. 1–6. doi:10.1109/ISI58743.2023.10297271. ISBN 979-8-3503-3773-0.
- ^ Sandbrink, Jonas (2023-08-07). "ChatGPT could make bioterrorism horrifyingly easy". Vox. Retrieved 2024-11-14.
- ^ "White House says no need to restrict open-source AI, for now". PBS News. 2024-07-30. Retrieved 2024-11-14.