Open Neural Network Exchange

From Wikipedia, the free encyclopedia
Open Neural Network Exchange (ONNX)
Original author(s): Facebook, Microsoft
Developer(s): Linux Foundation
Initial release: September 2017
Stable release: 1.17.0[1] / 1 October 2024
Repository
Written in: C++, Python
Operating system: Windows, Linux
Type: Artificial intelligence ecosystem
License: initially MIT License; later changed to Apache License 2.0
Website: onnx.ai

The Open Neural Network Exchange (ONNX) [ˈɒnɪks][2] is an open-source artificial intelligence ecosystem[3] of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools, with the aim of promoting innovation and collaboration in the AI sector. ONNX is available on GitHub.

History

ONNX was originally named Toffee[4] and was developed by the PyTorch team at Facebook.[5] In September 2017, it was renamed to ONNX and announced by Facebook and Microsoft.[6] Later, IBM, Huawei, Intel, AMD, Arm and Qualcomm announced support for the initiative.[3]

In October 2017, Microsoft announced that it would add its Cognitive Toolkit and Project Brainwave platform to the initiative.[3]

In November 2019, ONNX was accepted as a graduate project in Linux Foundation AI.[7]

In October 2020, Zetane Systems became a member of the ONNX ecosystem.[8]

Intent

The initiative targets:

Framework interoperability

Allow developers to more easily move between frameworks, some of which may be more desirable for specific phases of the development process, such as fast training, network architecture flexibility or inferencing on mobile devices.[6]
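
For example, a model defined and trained in one framework can be exported to the ONNX format and then evaluated by a separate runtime. The following is a minimal sketch assuming PyTorch, ONNX Runtime and NumPy are installed; the toy model architecture and the file name "tiny.onnx" are illustrative only, not part of any ONNX specification.

    # Sketch: define a model in PyTorch, export it to ONNX,
    # then run inference on the exported graph with ONNX Runtime.
    import numpy as np
    import torch
    import torch.nn as nn
    import onnxruntime as ort

    # Illustrative toy model.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    model.eval()

    # Export the PyTorch model to the ONNX format.
    dummy_input = torch.randn(1, 4)
    torch.onnx.export(model, dummy_input, "tiny.onnx",
                      input_names=["input"], output_names=["output"])

    # Any ONNX-supporting runtime can now evaluate the exported graph.
    session = ort.InferenceSession("tiny.onnx")
    outputs = session.run(["output"],
                          {"input": np.random.randn(1, 4).astype(np.float32)})
    print(outputs[0].shape)  # (1, 2)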

Shared optimization

Allow hardware vendors and others to improve the performance of artificial neural networks of multiple frameworks at once by targeting the ONNX representation.[6]
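
As one concrete illustration (a minimal sketch, assuming ONNX Runtime is installed), a runtime that targets the ONNX representation can apply its graph-level optimizations to any model in that format, regardless of which framework produced it. The file names below are illustrative only.

    # Sketch: apply ONNX Runtime's graph optimizations to an ONNX model.
    import onnxruntime as ort

    options = ort.SessionOptions()
    options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    options.optimized_model_filepath = "tiny_optimized.onnx"  # save the optimized graph

    # Creating the session runs the optimizer over the ONNX graph,
    # independently of the framework that exported "tiny.onnx".
    session = ort.InferenceSession("tiny.onnx", options)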

Contents

ONNX provides definitions of an extensible computation graph model, built-in operators and standard data types, focused on inferencing (evaluation).[6]

Each computation dataflow graph is structured as a list of nodes that form an acyclic graph. Each node has inputs and outputs and is a call to an operator. Metadata documents the graph. Built-in operators are expected to be available on each ONNX-supporting framework.[6]
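
As an illustration of this graph model, a small ONNX graph can be built directly with the helper API of the onnx Python package. This is a minimal sketch; the tensor names ("X", "W", "B", "Y"), the graph name and the file name are illustrative only.

    # Sketch: build an ONNX graph of two nodes (MatMul, Add) with typed
    # inputs/outputs, wrap it in a model with metadata, validate and save it.
    import onnx
    from onnx import helper, TensorProto

    # Typed graph inputs and outputs (standard data types).
    X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])
    W = helper.make_tensor_value_info("W", TensorProto.FLOAT, [4, 2])
    B = helper.make_tensor_value_info("B", TensorProto.FLOAT, [1, 2])
    Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 2])

    # Each node is a call to a built-in operator; together the nodes
    # form an acyclic dataflow graph.
    matmul = helper.make_node("MatMul", inputs=["X", "W"], outputs=["XW"])
    add = helper.make_node("Add", inputs=["XW", "B"], outputs=["Y"])

    graph = helper.make_graph([matmul, add], "tiny-example", [X, W, B], [Y])
    model = helper.make_model(graph, producer_name="example")

    onnx.checker.check_model(model)  # validates operators, types and graph structure
    onnx.save(model, "tiny.onnx")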

References

  1. ^ "Release 1.17.0". 1 October 2024. Retrieved 22 October 2024.
  2. ^ @onnxai (March 13, 2018). "Hi Hu, it's not spelled the same, but pronounced the same as Onyx" (Tweet) – via Twitter.
  3. ^ a b c "Microsoft and Facebook's open AI ecosystem gains more support". Engadget. Retrieved 2017-10-11.
  4. ^ "Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match · pytorch/pytorch@6d8d5ba". GitHub. Retrieved 2021-10-12.
  5. ^ "A model exporter for PyTorch by ezyang · Pull Request #2565 · pytorch/pytorch". GitHub. Retrieved 2021-10-12.
  6. ^ a b c d e "Microsoft and Facebook create open ecosystem for AI model interoperability – Microsoft Cognitive Toolkit". Microsoft Cognitive Toolkit. 2017-09-07. Retrieved 2017-10-11.
  7. ^ "LF AI & Data Day – ONNX Community Meetup – Silicon Valley". LF Online Community.
  8. ^ "Zetane Systems Joins the ONNX Community to Accelerate Open-Source Innovation and Universal…". 14 October 2020.