Amos Storkey
| Amos James Storkey | |
| --- | --- |
| Born | 14 February 1971 |
| Nationality | British |
| Alma mater | Trinity College, Cambridge |
| Known for | Storkey learning rule; first convolutional network for learning Go |
| Parents | Alan Storkey, Elaine Storkey |
| **Scientific career** | |
| Fields | Machine learning, artificial intelligence, computer science |
| Institutions | University of Edinburgh |
Amos James Storkey (born 1971) is Professor of Machine Learning and Artificial Intelligence at the School of Informatics, University of Edinburgh.
Storkey studied mathematics at Trinity College, Cambridge, and obtained his doctorate from Imperial College London. In 1997, during his PhD, he worked on the Hopfield network, a form of recurrent artificial neural network popularized by John Hopfield in 1982. Hopfield nets serve as content-addressable ("associative") memory systems with binary threshold nodes, and Storkey developed what became known as the "Storkey learning rule".[1][2][3][4]
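The Storkey rule adds, for each stored pattern, a Hebbian term together with two local-field correction terms, which gives the network a higher storage capacity than the standard Hebbian rule. The following is a minimal illustrative sketch in NumPy, written for this article rather than taken from the cited sources; the function name `storkey_train` is our own:

```python
import numpy as np

def storkey_train(patterns):
    """Train Hopfield weights with the Storkey learning rule.

    patterns: array of shape (num_patterns, n), entries in {-1, +1}.
    Returns a symmetric (n, n) weight matrix with zero diagonal.
    """
    _, n = patterns.shape
    W = np.zeros((n, n))
    for xi in patterns:
        h = W @ xi                        # local fields h_i = sum_k W_ik xi_k
        # h_ij excludes unit j's own contribution (diagonal of W is zero)
        H = h[:, None] - W * xi[None, :]  # H[i, j] = h_i - W_ij xi_j
        # Hebbian term minus two local-field corrections, each scaled by 1/n
        W = W + (np.outer(xi, xi) - xi[:, None] * H.T - H * xi[None, :]) / n
        np.fill_diagonal(W, 0.0)
    return W
```

At low memory load, each stored pattern is then a fixed point of the usual recall dynamics, in which the state is repeatedly updated as the sign of the weighted input, `x = sign(W @ x)`.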
Subsequently, he has worked on approximate Bayesian methods, machine learning in astronomy,[5] graphical models, inference and sampling, and neural networks. Storkey joined the School of Informatics at the University of Edinburgh in 1999, was a Microsoft Research Fellow from 2003 to 2004, was appointed reader in 2012, and was appointed to a personal chair in 2018. He is a member of the Institute for Adaptive and Neural Computation, was director of the Centre for Doctoral Training (CDT) in Data Science from 2014 to 2022, and leads the Bayesian and Neural Systems Group.[6]

In December 2014, Clark and Storkey published the paper "Teaching Deep Convolutional Neural Networks to Play Go". A convolutional neural network (CNN) is a class of deep neural network most commonly applied to analyzing visual imagery. The paper showed that a convolutional neural network, trained by supervised learning on a database of human professional games, could outperform GNU Go and win some games against the Monte Carlo tree search program Fuego 1.1 in a fraction of the time Fuego took to play.[7][8][9][10]
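The basic idea of such a move-prediction network can be illustrated with a toy sketch: the board is encoded as feature planes, convolutional filters are applied across it, and a softmax over board points yields move probabilities. This is a minimal NumPy illustration with untrained random filters, not the network from the paper; the plane encoding, filter shapes, and function names are our assumptions (the published network was deeper and trained on professional game records):

```python
import numpy as np

def conv2d_same(planes, kernels):
    """Naive 'same'-padded 2D convolution.

    planes:  input feature planes, shape (C, H, W)
    kernels: filter bank, shape (K, C, 3, 3)
    returns: output feature maps, shape (K, H, W)
    """
    C, H, W = planes.shape
    K = kernels.shape[0]
    padded = np.pad(planes, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((K, H, W))
    for k in range(K):
        for i in range(H):
            for j in range(W):
                out[k, i, j] = np.sum(padded[:, i:i + 3, j:j + 3] * kernels[k])
    return out

def move_probabilities(board):
    """Map a board to a probability distribution over moves.

    board: (2, 19, 19) planes marking own and opponent stones with 1s.
    Uses one hidden conv layer with ReLU, then a softmax over all points.
    """
    rng = np.random.default_rng(0)  # untrained random filters, for illustration
    hidden = np.maximum(conv2d_same(board, rng.standard_normal((8, 2, 3, 3))), 0)
    logits = conv2d_same(hidden, rng.standard_normal((1, 8, 3, 3)))[0]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```

In supervised training, the softmax output would be fit against the move actually chosen by the professional player in each recorded position.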
Most cited work
- Antoniou A, Storkey A, Edwards H. "Data augmentation generative adversarial networks." arXiv preprint arXiv:1711.04340 (2017). According to Google Scholar, this paper has been cited 490 times.[11]
- Burda Y, Edwards H, Storkey A, Klimov O. "Exploration by random network distillation." arXiv preprint arXiv:1810.12894 (2018). According to Google Scholar, this paper has been cited 368 times.[11]
- Burda Y, Edwards H, Pathak D, Storkey A, Darrell T, Efros AA. "Large-scale study of curiosity-driven learning." arXiv preprint arXiv:1808.04355 (2018). According to Google Scholar, this paper has been cited 313 times.[11]
- Everingham M, Zisserman A, Williams CK, Van Gool L, Allan M, Bishop CM, Chapelle O, Dalal N, Deselaers T, Dorkó G, Duffner S. "The 2005 PASCAL visual object classes challenge." In Machine Learning Challenges Workshop (2005), pp. 117–176. Springer, Berlin, Heidelberg. According to Google Scholar, this paper has been cited 306 times.[11]
- Toussaint M, Storkey A. "Probabilistic inference for solving discrete and continuous state Markov decision processes." In Proceedings of the 23rd International Conference on Machine Learning (2006), pp. 945–952. According to Google Scholar, this paper has been cited 217 times.[11]
References
- ^ Aggarwal, Charu C. Neural Networks and Deep Learning, p. 240.
- ^ "Leveraging Different Learning Rules in Hopfield Nets for Multiclass Classification". saiconference.com.
- ^ Storkey, Amos. "Increasing the capacity of a Hopfield network without sacrificing functionality." Artificial Neural Networks – ICANN'97 (1997): 451-456.
- ^ Storkey, Amos. "Efficient Covariance Matrix Methods for Bayesian Gaussian Processes and Hopfield Neural Networks". PhD Thesis. University of London. (1999)
- ^ "One giant scrapheap for mankind". BBC News. 15 April 2004.
- ^ "Home". bayeswatch.com.
- ^ Emerging Technology from the arXiv. "Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time". MIT Technology Review.
- ^ Maddison, Chris J. "Move Evaluation in Go". http://www0.cs.ucl.ac.uk/staff/d.silver/web/Applications_files/deepgo.pdf
- ^ Clark, Christopher; Storkey, Amos (2014). "Teaching Deep Convolutional Neural Networks to Play Go". arXiv:1412.3409 [cs.AI].
- ^ Convolutional neural network
- ^ a b c d e Google Scholar search for "Amos Storkey": https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=Amos+storkey&btnG=. Accessed 14 June 2021.