Perceptrons (book)

Perceptrons: an introduction to computational geometry
Authors: Marvin Minsky, Seymour Papert
Publication date: 1969
ISBN: 0-262-13043-2

Perceptrons: an introduction to computational geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1988 (ISBN 9780262631112) after the revival of neural networks, containing a chapter dedicated to countering the criticisms made of the book in the 1980s.

The main subject of the book is the perceptron, a type of artificial neural network developed in the late 1950s and early 1960s. The book was dedicated to psychologist Frank Rosenblatt, who in 1957 had published the first model of a "Perceptron".[1] Rosenblatt and Minsky had known each other since adolescence, having studied at the Bronx High School of Science one year apart.[2] They later became central figures in a debate within the AI research community, and are known to have held loud discussions at conferences, yet remained friendly.[3]

This book is at the center of a long-standing controversy in the study of artificial intelligence. It is claimed that pessimistic predictions made by its authors were responsible for a change in the direction of AI research, concentrating efforts on so-called "symbolic" systems, a line of research that petered out and contributed to the so-called AI winter of the 1980s, when AI's promise went unrealized.[4]

The crux of Perceptrons is a series of mathematical proofs which acknowledge some of the perceptron's strengths while also demonstrating major limitations.[3] The most important concerns the computation of certain predicates, such as the XOR function, as well as the important connectedness predicate. The problem of connectedness is illustrated on the awkwardly colored cover of the book, intended to show that humans themselves have difficulty computing this predicate.[5] One reviewer, Earl Hunt, noted that the XOR function is difficult for humans to acquire as well in concept-learning experiments.[6]
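The XOR limitation can be illustrated with a small sketch (not from the book): a single-layer perceptron is a linear threshold unit, and no choice of weights and bias can reproduce XOR, because its positive and negative cases are not linearly separable. The brute-force search below, over a hypothetical grid of candidate weights, finds a separating setting for AND but none for XOR.

```python
import itertools

def predict(w1, w2, b, x1, x2):
    # Linear threshold unit: output 1 iff the weighted sum exceeds 0
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def separable(truth_table, grid):
    # True if some (w1, w2, b) in the grid reproduces the table exactly
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(predict(w1, w2, b, x1, x2) == y
               for (x1, x2), y in truth_table.items()):
            return True
    return False

# Candidate weights from -2.0 to 2.0 in steps of 0.25 (illustrative grid)
grid = [x / 4 for x in range(-8, 9)]
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

print(separable(AND, grid))  # True: e.g. w1 = w2 = 1, b = -1.5 works
print(separable(XOR, grid))  # False: no single-layer solution exists
```

The failure on XOR is not an artifact of the grid: the four constraints imply b <= 0, w1 + b > 0, w2 + b > 0, and w1 + w2 + b <= 0, and adding the middle two gives w1 + w2 + 2b > 0, which contradicts the other two. Overcoming this requires a hidden layer, a point central to the later revival of multi-layer networks.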

  1. ^ Rosenblatt, Frank (January 1957). The Perceptron: A Perceiving and Recognizing Automaton (Project PARA) (PDF) (Report). Cornell Aeronautical Laboratory, Inc. Report No. 85-460-1. Retrieved 29 December 2019. Memorialized at Joe Pater, Brain Wars: How does the mind work? And why is that so important?, UMass Amherst.
  2. ^ Crevier 1993
  3. ^ a b Olazaran 1996.
  4. ^ Mitchell, Melanie (October 2019). Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux. ISBN 978-0-374-25783-5.
  5. ^ Minsky-Papert 1972:74 shows the figures in black and white. The cover of the 1972 paperback edition has them printed purple on a red background, and this makes the connectivity even more difficult to discern without the use of a finger or other means to follow the patterns mechanically. This problem is discussed in detail on pp.136ff and indeed involves tracing the boundary.
  6. ^ Hunt, Earl (1971). "Review of Perceptrons". The American Journal of Psychology. 84 (3): 445–447. doi:10.2307/1420478. ISSN 0002-9556. JSTOR 1420478.
