
User:Fluffle-Prime/Quantum neural network/CtFrck Peer Review

[Image: Sample model of a feed-forward neural network]

Quantum neural networks can take the form of classical data processed on a quantum computer, quantum data processed on a classical computer, or quantum data processed on a quantum computer. Most QNNs are developed as feed-forward networks. Similar to their classical counterparts, this structure takes input from one layer of qubits, passes it through one or more intermediate layers of qubits, and finally produces a result at a layer of output qubits. The structure is trained on which path to take, similar to classical artificial neural networks.
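As a small illustration of the first case (classical data on a quantum computer), the sketch below shows angle encoding, one common way of loading a classical feature into a qubit state. This is an illustrative example only; the feature value and function names are hypothetical and not taken from the draft or its sources.

```python
import numpy as np

# Minimal sketch: "angle encoding" maps a classical feature x to a
# single-qubit rotation applied to |0>, i.e. |psi(x)> = RY(x)|0>.

def ry(theta):
    """Standard single-qubit RY rotation gate as a 2x2 matrix."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

x = 0.7                          # a classical feature value (hypothetical)
ket0 = np.array([1.0, 0.0])      # qubit initialised to |0>
encoded = ry(x) @ ket0           # |psi(x)> = RY(x)|0>

# Measurement probabilities now depend on the classical input x.
print(np.abs(encoded) ** 2)
```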

Comments

I looked at the introduction of the original article and thought it was very wordy and confusing. I think it would be good to revise the introduction to make it clearer and more informative.


Quantum neural networks (QNNs) are computational neural network models which are based on the principles of quantum mechanics.

The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function.


However, typical research in QNNs involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.

Training

Quantum neural networks can in theory be trained similarly to classical artificial neural networks. A key difference lies in communication between the layers of the network. In a classical neural network, at the end of a given operation, the current perceptron copies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this copying would violate the no-cloning theorem. A proposed generalized solution is to replace the classical fan-out method with an arbitrary unitary that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary (U_f) together with a dummy qubit in a known state (for example, |0> in the computational basis), also known as an ancilla qubit, the information from one qubit can be transferred to the next layer of qubits. This also keeps the process reversible, as required of any quantum operation.
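As a minimal sketch of this idea (my own illustration, not taken from the draft), the fan-out unitary is shown below as a CNOT gate, the simplest unitary that spreads computational-basis information from one qubit onto an ancilla prepared in |0> without cloning an arbitrary state.

```python
import numpy as np

# CNOT on two qubits (control = first qubit, target = second qubit),
# in the basis order |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# An arbitrary single-qubit "perceptron output" a|0> + b|1>.
a, b = 0.6, 0.8
source = np.array([a, b], dtype=complex)
ancilla = np.array([1.0, 0.0], dtype=complex)   # ancilla qubit in |0>

joint_in = np.kron(source, ancilla)             # |psi> (x) |0>
joint_out = CNOT @ joint_in                     # a|00> + b|11>

print(joint_out.round(3))
# Note: the result is an entangled state, not two copies of |psi>,
# so the no-cloning theorem is respected, and the CNOT is unitary,
# so the step is reversible.
```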

Comment

This is really good! I thought it was really clear and had a lot of information.


Things to continue to add to the training section

  • Proposals for training algorithms
  • Cost functions


Comment

This might be what you were already saying you would add below, but you could consider adding equations to the sections, along with examples.

https://www.nature.com/articles/s41467-020-14454-2

This article defines a QNN using equations and also gives the training algorithm and more. It is applied to deep QNNs, so I don't know if that changes its relevance, but I thought I'd add it in case it is helpful.
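As one illustration of what such equations might look like, here is a rough sketch (my own paraphrase of the layer map and fidelity-based cost function used in that line of work on deep QNNs; the exact notation should be checked against the linked article before adding it):

```latex
% Layer-to-layer map: the state of layer l is obtained by applying the layer
% unitary U^l to the previous layer's state together with fresh qubits in
% |0...0>, then tracing out the previous layer.
\rho^{\,l}_x \;=\; \operatorname{tr}_{l-1}\!\left[\, U^{l}\,\bigl(\rho^{\,l-1}_x \otimes |0\cdots0\rangle_{l}\langle 0\cdots0|\bigr)\, U^{l\,\dagger} \right]

% Fidelity-based cost function over N training pairs, comparing the network
% output \rho^{out}_x with the desired output state |\phi^{out}_x>.
C \;=\; \frac{1}{N}\sum_{x=1}^{N} \langle \phi^{\mathrm{out}}_x |\, \rho^{\mathrm{out}}_x \,| \phi^{\mathrm{out}}_x \rangle
```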


Things to add to the example section

  • Recent developments or applications of QNNs
  • The different forms of QNN that make it quantum
    • Quantum data on a classical network/device
    • Classical data on a quantum network/device
      • Source talks a little bit about this
    • Quantum data on a quantum device
      • Source talks about this mainly
  • Potentially more, less recent sources