
David Wolpert

David H. Wolpert
  • Nationality: American
  • Alma mater: Princeton University; University of California, Santa Barbara
  • Fields: Mathematics; Computer science
  • Institutions: Santa Fe Institute
  • Doctoral advisor: Anthony Zee

David Hilton Wolpert is an American physicist and computer scientist. He is a professor at the Santa Fe Institute. He is the author of three books and over one hundred refereed papers, holds three patents, and has received two awards. His name is particularly associated with a result in computer science known as the no free lunch theorem.

Career


David Wolpert received a B.A. in physics from Princeton University (1984), then attended the University of California, Santa Barbara, where he earned an M.A. (1987) and a Ph.D. (1989).

Between 1989 and 1997 he pursued a research career at Los Alamos National Laboratory, IBM, TXN Inc., and the Santa Fe Institute.

From 1997 to 2011 he worked as a senior computer scientist at NASA Ames Research Center and became a visiting scholar at the Max Planck Institute. He spent the 2010–11 academic year as an Ulam Scholar at the Center for Nonlinear Studies at Los Alamos.[1]

He joined the faculty of the Santa Fe Institute in 2011 and became a professor there in September 2013.[2] His research interests have included statistics, game theory, machine learning applications, information theory, optimization methods, and complex systems theory.

"No free lunch"


One of Wolpert's most widely discussed results is the no free lunch theorem in search and optimization.[3][4][5][6] The theorem states that all search and optimization algorithms perform equally well when their performance is averaged over all possible problems of the kind they are designed for. In a machine learning context, however, the theorem rests on an implicit assumption of no overlap between training and test data, an assumption that is rarely true in practice.[7] More generally, the theorem holds only under conditions that are seldom met exactly in real life,[8][9][10] although it has been claimed that they can be met approximately.[11] The theorem lies within the domain of computer science, but a weaker version known as the “folkloric no free lunch theorem” has been invoked by William A. Dembski in support of intelligent design.[12] This use of the theorem has been rejected by Wolpert himself[13] and others.[14][15]
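
Stated compactly, and only as a sketch following the notation of the 1997 paper with Macready,[4] let d_m^y denote the sequence of m cost values an algorithm has sampled on an objective function f. Then for any pair of algorithms a_1 and a_2,

\[
\sum_{f} P\!\left(d_m^y \mid f, m, a_1\right) \;=\; \sum_{f} P\!\left(d_m^y \mid f, m, a_2\right),
\]

so any performance measure that depends only on the sampled cost values has the same average, taken over all objective functions f, for every algorithm.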

Limitation on knowledge


Wolpert has formalized an argument showing that it is in principle impossible for any intellect to know everything about the universe of which it forms a part, thereby ruling out "Laplace's demon".[16] This has been seen as an extension of the twentieth century's limitative theorems, such as those of Heisenberg and Gödel.[17] In 2018 Wolpert published a proof revealing the fundamental limits of scientific knowledge.[18]

Machine learning


Wolpert made many contributions to early work on machine learning. These include a Bayesian estimator of the entropy of a distribution based on samples from the distribution,[19][20] disproving formal claims that the "evidence procedure" is equivalent to hierarchical Bayes,[21] a Bayesian alternative to the chi-squared test,[22] a proof that there is no prior for which the bootstrap procedure is Bayes-optimal,[23] and Bayesian extensions of the bias-plus-variance decomposition.[24] Most prominently, he introduced "stacked generalization",[25] a more sophisticated version of cross-validation that uses held-in / held-out partitions of a data set to combine learning algorithms rather than merely choose among them. This work was developed further by Breiman, Smyth, Clarke and many others, and in particular the top two teams in the 2009 Netflix Prize competition made use of stacked generalization (rebranded as "blending").[26]
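
The following is only a minimal sketch of the held-in / held-out idea behind stacked generalization, assuming a Python environment with scikit-learn; the dataset and the particular base learners and meta-learner are illustrative assumptions, not choices from Wolpert's 1992 paper.

# Minimal sketch of stacked generalization (stacking).
# Level-0 learners are combined by a level-1 (meta) learner that is trained
# only on held-out predictions, so it never sees a learner's fit to its own data.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_learners = [DecisionTreeClassifier(random_state=0), KNeighborsClassifier()]
meta_learner = LogisticRegression(max_iter=1000)

# Level-0: out-of-fold ("held-out") predictions for each base learner.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
meta_features = np.zeros((len(X_train), len(base_learners)))
for j, model in enumerate(base_learners):
    for held_in_idx, held_out_idx in kf.split(X_train):
        model.fit(X_train[held_in_idx], y_train[held_in_idx])
        meta_features[held_out_idx, j] = model.predict_proba(X_train[held_out_idx])[:, 1]

# Level-1: the meta-learner learns how to combine the base learners' outputs.
meta_learner.fit(meta_features, y_train)

# At test time, refit each base learner on all training data, then stack.
test_features = np.column_stack(
    [m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in base_learners])
print("stacked accuracy:", meta_learner.score(test_features, y_test))

Cross-validation would use the held-out predictions merely to pick the single best base learner; stacking instead feeds them to a second-level learner, which is what distinguishes the two procedures.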

Academic memberships

  • Fellow of IEEE
  • Member of FQXi
  • Research Associate of Infometrics Institute, American University
  • Associate Editor (as of January 2017)
  • Member of editorial board (as of January 2017)
  • Member of one NSF panel

Awards

  • Princeton University Physics Department Kusaka Prize
  • Best Paper Award for IEEE Transactions on Evolutionary Computation, Vols. 1 & 2
  • Superior Accomplishment Award for NASA Code IC for 1999

Publications (books only)

  • Wolpert, D.H., Kempes, C., Stadler, P. F., Grochow, J.A. (eds.), The Energetics of Computing in Life and Machines, Santa Fe Institute Press, 2019. ISBN 1947864076
  • Wolpert, D.H. (ed.), The Mathematics of Generalization, Addison-Wesley, 1994. ISBN 0201409852
  • Wolpert, D.H. An Incompleteness Theorem for Calculating the Future, SFI Economics Program, Santa Fe Institute, 1996.
  • Tumer, K. and Wolpert, D.H. (eds.), Collectives And The Design Of Complex Systems, Springer, 2004. ISBN 0387401652
  • Guy, T.V., Karny M., Wolpert D.H. (eds.), Decision making with imperfect decision makers, Springer, 2012. ISBN 3642246478
  • Wolpert, D.H. Theory of Collective Intelligence, NASA Technical Reports Server, 2003. ISBN 1289283427

References

  1. ^ "CNLS Ulam Scholar". Archived from the original on 2014-10-26. Retrieved 2014-09-22.
  2. ^ David Wolpert, Santa Fe Institute
  3. ^ Wolpert, D.H., Macready, W.G. (1995), No Free Lunch Theorems for Search, Technical Report SFI-TR-95-02-010 (Santa Fe Institute).
  4. ^ Wolpert D.H., Macready W.G. (1997). "No Free Lunch Theorems for Optimization" (PDF). IEEE Transactions on Evolutionary Computation. 1 (1): 67–82. CiteSeerX 10.1.1.138.6606. doi:10.1109/4235.585893. S2CID 5553697.
  5. ^ Wolpert, David (1996), The Lack of A Priori Distinctions between Learning Algorithms, Neural Computation, pp. 1341–1390.
  6. ^ David H. Wolpert, What the No Free Lunch Theorems Really Mean; How to Improve Search Algorithms, SFI Working Paper 2012-10-017, Santa Fe Institute 2012
  7. ^ Baxter, Jonathan (1999). "Some Observations Concerning Off-Training-Set (OTS) Error". arXiv:1912.05915.
  8. ^ Streeter, M. (2003) Two Broad Classes of Functions for Which a No Free Lunch Result Does Not Hold, Genetic and Evolutionary Computation – GECCO 2003, pp. 1418–1430.
  9. ^ Igel C., Toussaint M. (2004). "A No-Free-Lunch Theorem for Non-Uniform Distributions of Target Functions". Journal of Mathematical Modelling and Algorithms. 3 (4): 313–322. CiteSeerX 10.1.1.71.9744. doi:10.1023/b:jmma.0000049381.24625.f7. S2CID 195292166.
  10. ^ English, T. (2004), No More Lunch: Analysis of Sequential Search Archived 2015-05-01 at the Wayback Machine, Proceedings of the 2004 IEEE Congress on Evolutionary Computation, pp. 227–234.
  11. ^ Droste S., Jansen T., Wegener I. (2002). "Optimization with randomized search heuristics: the (A)NFL theorem, realistic scenarios, and difficult functions". Theoretical Computer Science. 287 (1): 131–144. doi:10.1016/s0304-3975(02)00094-4. hdl:2003/5394.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  12. ^ Dembski, W. A. (2002) No Free Lunch, Rowman & Littlefield, ISBN 0-7425-1297-5
  13. ^ Wolpert, D. (2003), William Dembski's treatment of the No Free Lunch theorems is written in jello, Talk Reason
  14. ^ Perakh, M. (2003), The No Free Lunch Theorems and Their Application to Evolutionary Algorithms, Talk Reason.
  15. ^ Richard Wein (2002), Not a Free Lunch But a Box of Chocolates (Sect. 5.3), The TalkOrigins Archive
  16. ^ David H. Wolpert (2008). "Physical limits of inference". Physica D. 237 (9): 1257–1281. arXiv:0708.1362. Bibcode:2008PhyD..237.1257W. doi:10.1016/j.physd.2008.03.040. S2CID 2033616. full text
  17. ^ Graham P. Collins, Within Any Possible Universe, No Intellect Can Ever Know It All, Scientific American, 16 February 2009
  18. ^ "New proof reveals fundamental limits of scientific knowledge". Retrieved 2018-10-04.
  19. ^ David H. Wolpert and David Wolf (1995). "Estimating Functions of Probability Distributions from a Finite Set of Samples". Physical Review E. 52 (6): 6841–6854. Bibcode:1995PhRvE..52.6841W. CiteSeerX 10.1.1.55.7122. doi:10.1103/physreve.52.6841. PMID 9964199. S2CID 9795679.
  20. ^ David H. Wolpert and Simon DeDeo (2013). "Estimating Functions of Distributions Defined over Spaces of Unknown Size". Entropy. 15 (12): 4668–4699. arXiv:1311.4548. Bibcode:2013Entrp..15.4668W. doi:10.3390/e15114668. S2CID 2737117.
  21. ^ David H. Wolpert and Charles E. Strauss (1996). "What Bayes has to say about the evidence procedure". Maximum Entropy and Bayesian Methods 1993.
  22. ^ David H. Wolpert (1996). "Determining Whether Two Data Sets are from the Same Distribution". Maximum Entropy and Bayesian Methods 1995.
  23. ^ David H. Wolpert (1996). "The Bootstrap is Inconsistent with Probability Theory". Maximum Entropy and Bayesian Methods 1995.
  24. ^ David H. Wolpert (1997). "On Bias plus Variance". Neural Computation. 9 (6): 1211–1243. doi:10.1162/neco.1997.9.6.1211. S2CID 15418441.
  25. ^ David H. Wolpert (1992). "Stacked Generalization". Neural Networks. 5 (2): 241–259. CiteSeerX 10.1.1.133.8090. doi:10.1016/s0893-6080(05)80023-1.
  26. ^ Joseph Sill; et al. (2009). "Feature-Weighted Linear Stacking". arXiv:0911.0460.