Seas of ones and zeroes
While the innards of artificial intelligence may sometimes seem inhuman to us, emerging as they do from a sea of ones and zeros, its architecture is often biologically inspired. For example, as Melanie Mitchell notes in her book Artificial Intelligence, methods for reinforcement learning are inspired in part by operant conditioning in psychology. Much like humans and animals, machines can learn through reward and punishment. Researchers at DeepMind used these techniques to train programs to play arcade games like Pong, Space Invaders, and Breakout on the Atari console.
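A rough sense of how reward-driven learning works can be sketched in a few lines of Python. The toy problem below is invented purely for illustration (it is nothing like DeepMind’s Atari setup, which learned from raw pixels with deep networks): an agent on a three-state track is rewarded only for reaching the goal state, and a tabular Q-learning update gradually reinforces the actions that lead there.

```python
import random
from collections import defaultdict

# Hypothetical toy environment: states 0..2, actions 0 (back) and 1 (forward).
# Reaching state 2 yields a reward of +1; every other step yields 0.
def step(state, action):
    next_state = min(state + 1, 2) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == 2 else 0.0
    return next_state, reward, next_state == 2

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)                  # Q[(state, action)] -> estimated value

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit current value estimates.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Reward (or its absence) nudges the value estimate -- the machine
        # analogue of operant conditioning: rewarded behavior is reinforced.
        best_next = max(Q[(next_state, a)] for a in [0, 1])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(dict(Q))
```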
Similarly, the convolutional neural networks (ConvNets) that researchers deploy for image recognition are inspired by developments in neuroscience. As Mitchell explains, “like neurons in the visual cortex, the [‘simulated neuron’] units in a ConvNet act as detectors for important visual features”, like colors or edges, inside their receptive field. Activations in these units are then weighted, summed, and fed into subsequent layers for further processing. “As [we] go up the hierarchy, the detectors become sensitive to increasingly more complex features” of the visual input. Upon reaching the final “fully-connected layer”, the network classifies the input image and reports its confidence in that assessment.
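To make that hierarchy concrete, here is a minimal ConvNet sketch in PyTorch. The layer sizes, input dimensions, and class count are arbitrary placeholders chosen for illustration, not the networks Mitchell describes; the point is only the shape of the architecture: convolutional feature detectors, pooling, and a final fully connected layer whose softmax output serves as the network’s confidence.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early layer: units with small receptive fields detect simple
        # features such as edges and color blobs.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Deeper layer: combines earlier activations into more complex features.
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # Final fully connected layer maps the learned features to class scores.
        self.fc = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                       # x: (batch, 3, 32, 32)
        x = self.pool(F.relu(self.conv1(x)))    # -> (batch, 16, 16, 16)
        x = self.pool(F.relu(self.conv2(x)))    # -> (batch, 32, 8, 8)
        x = x.flatten(1)
        return self.fc(x)                       # raw class scores (logits)

net = TinyConvNet()
image = torch.randn(1, 3, 32, 32)               # stand-in for an input image
confidences = F.softmax(net(image), dim=1)      # "confidence" in each class
print(confidences)
```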
Ostrich school buses
Yet despite these architectural similarities, the behavior of artificial intelligence is also rather inhuman. Convolutional networks for image recognition, for example, are susceptible to “adversarial examples”: subtle manipulations of the input image, imperceptible to the human eye, can fool the algorithm. In a humorous case, the ConvNet called AlexNet began to mistake school buses for ostriches after researchers made marginal distortions to the input images. Humans, on the other hand, are far less prone to such visual errors; were that not the case, we would not have lasted long as a species. (Although we do have our own slew of quirks, like our susceptibility to optical illusions.)
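The mechanics of such attacks can be illustrated with the fast gradient sign method, a simple gradient-based perturbation. (The original AlexNet “ostrich” experiments used a different, optimization-based attack; this is just the easiest version to show.) The stand-in model, the class label, and the epsilon value below are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A stand-in classifier; any differentiable image model would do here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Fast gradient sign method: nudge each pixel slightly in the
    direction that most increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A perturbation of +/- epsilon per pixel is typically imperceptible
    # to a human, yet can be enough to flip the model's prediction.
    return (image + epsilon * image.grad.sign()).detach()

image = torch.rand(1, 3, 32, 32)     # stand-in for a school-bus photo
label = torch.tensor([0])            # hypothetical "school bus" class id
adversarial = fgsm_perturb(image, label)
print(model(image).argmax(1), model(adversarial).argmax(1))
```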
Moreover, these neural networks require a great deal of supervision and data. While they may be superhuman in many respects, they are specialized and inflexible. As Mitchell notes, unlike children and curious adults, these machines do not ask questions, seek information, draw connections, or explore flexibly. They do not think about their thinking, or understand what they do. A network that learns to play chess or recognize images cannot learn to do much else, despite all the knowledge and training it possesses. “No one has yet come up with the kinds of algorithms needed to perform successful unsupervised learning,” Mitchell writes.
Transfer learning
Humans, by contrast, are much better at “transfer learning”. While imperfect, the skills and knowledge we develop in one job, sport, or subject, whether in decision-making, communication, or something else, tend to transfer well into neighboring domains. As Mitchell observes, “for humans, a crucial part of intelligence is… being able to learn to think and to then apply our thinking flexibly.” This is similar to William Calvin’s view in How Brains Think: to him, intelligence involves “guessing well” when the situation is novel and unclear. For now, successful reinforcement learning algorithms tend to perform well only when the rules, states, rewards, information, and options are clear, as in a game of chess or Go. Unfortunately, “the real world doesn’t come so cleanly delineated”, Mitchell adds.
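In machine learning, “transfer learning” has a narrower, more mechanical meaning: reuse what one network has already learned as the starting point for a related task. A common recipe, sketched below under the assumption that a pretrained ResNet-18 is available through torchvision, is to freeze the existing feature detectors and train only a freshly attached output layer; the five-class target task is a placeholder.

```python
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet (assumed available via torchvision).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned feature detectors so they are reused, not retrained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for the new task
# (here, five hypothetical classes); only this layer will be trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# ...then train backbone.fc on the new dataset with any standard training loop.
```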
The ambiguity of language
Consider, for instance, the nebulousness of language. How might we construct a program to read and respond to written statements? We quickly find, Mitchell notes, that language is “inherently ambiguous”, context-dependent, and laden with assumed knowledge. Capturing all of this in a vast set of grammatical, linguistic, contextual, and cultural rules for a machine to run is no easy task. The word “charm”, for example, can be a noun or a verb with different contextual meanings; in physics, it even modifies a particular flavor of quark. This explains why early natural language processing algorithms that relied on “symbolic rule-based approaches” did not fare well: they could not incorporate all the nuances, subtleties, and exceptions.
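A toy illustration of the problem: the hand-written “tagger” below tries to decide, from a few rules about neighboring words, whether “charm” is acting as a noun, a verb, or the physics modifier. The rules are invented for this sketch (no historical system looked like this), and they already fail on easy counterexamples, which is exactly the trouble with rule-based approaches.

```python
# A toy, hand-written "rule set": tag "charm" by looking at neighboring words.
def tag_charm(sentence):
    words = sentence.lower().rstrip(".").split()
    i = words.index("charm")
    prev = words[i - 1] if i > 0 else ""
    nxt = words[i + 1] if i + 1 < len(words) else ""
    if nxt == "quark":
        return "modifier"   # physics sense: "a charm quark"
    if prev in {"to", "will", "can"}:
        return "verb"       # "to charm the audience"
    if prev in {"a", "the", "her", "lucky"}:
        return "noun"       # "a lucky charm"
    return "unknown"        # ...and the exceptions only multiply from here

print(tag_charm("She wore a lucky charm."))              # noun
print(tag_charm("He tried to charm the critics."))       # verb
print(tag_charm("Physicists detected a charm quark."))   # modifier
print(tag_charm("Charm alone will not fix this."))       # unknown -- the rules
# already fail here, and real language supplies endless such cases.
```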
Winograd schemas
It is in part for this reason that statistical approaches have been more successful in natural language processing. Rather than specify every rule, these approaches infer the result by studying correlations between words, phrases, sentences, and so on, across enormous datasets. Mitchell cautions, however, that more data and statistical crunching alone may not be enough to achieve human-like language abilities. To see why, she points to Winograd schemas, pairs of sentences with questions that are “easy for humans [to answer] but tricky for computers [to solve].”
Consider, for example, the following statements:
(1) “The city council refused the demonstrators a permit because they feared violence.”
(2) “The city council refused the demonstrators a permit because they advocated violence.”
In these two statements, who does “they” refer to? While the sentences differ by only one word (“feared” versus “advocated”), that difference is enough to change the referent. As Mitchell explains, “we rely on our background knowledge about how society works” to make sense of a somewhat ambiguous statement. In addition to statistical approaches, contextual knowledge and understanding appear necessary.
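One way a purely statistical system might attack the pair is to check which pairing, “council feared” or “demonstrators feared”, occurs more often in a large corpus and pick the more frequent subject as the referent. The sketch below uses a tiny invented corpus to show the shape of that idea, and also why it is shallow: the counts reflect whatever the corpus happens to say, not any understanding of permits, protests, or city politics.

```python
from collections import Counter

# A tiny stand-in corpus; a real system would draw on billions of words.
corpus = [
    "the council feared unrest",
    "the council feared protests",
    "the demonstrators advocated change",
    "the demonstrators feared arrest",
]

def cooccurrence(subject, verb):
    """Count corpus sentences in which the subject and verb appear together."""
    return sum(1 for line in corpus if subject in line and verb in line)

def resolve_pronoun(verb, candidates=("council", "demonstrators")):
    # Pick whichever candidate co-occurs with the verb more often.
    counts = Counter({c: cooccurrence(c, verb) for c in candidates})
    return counts.most_common(1)[0][0]

print(resolve_pronoun("feared"))     # "council" -- happens to match intuition
print(resolve_pronoun("advocated"))  # "demonstrators" -- likewise
# The answers look right only because of what this corpus happens to contain;
# the counts encode no knowledge of how city councils or permits work.
```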
Long tails and Asimov’s robots
The subtlety of language is an example of the long-tail problem in artificial intelligence. Take self-driving cars, for example. As Mitchell notes, it is impossible to train and prepare a self-driving algorithm for every conceivable permutation. Controlled environments cannot capture the open-ended possibilities of real life. When millions of self-driving cars are on the road, strange and bewildering scenarios are bound to arise by sheer probability. For the system to succeed, it must be clever and flexible enough to confront unexpected situations.
The challenges are reminiscent of Isaac Asimov’s insight into robotics and ethics in the 1940s. In particular, Asimov showed through science fiction how interactions between seemingly sensible rules can run into ambiguities, absurdities and unintended consequences. For instance, the first of Asimov’s “rules of robotics”—that “a robot may not injure a human being, or, through inaction, allow a human being to come to harm”—is already fraught. It is not difficult to conceive of spine-chilling scenarios in which action or inaction results in harm to someone somewhere.
Understanding and embodiment
In Mitchell’s view, “the ultimate problem [for artificial intelligence] is one of understanding.” Machines do not yet possess the “common-sense knowledge” that children and adults develop through their embeddedness in family, society, and nature. While artificial systems can develop representations of particular problems, they cannot yet abstract and analogize the way we do.
As Linda Smith and Michael Gasser argue in their embodiment hypothesis, “intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity… Starting as a baby grounded in a physical, social, and linguistic world is crucial to the development of the flexible and inventive intelligence that characterizes humankind.” So even if machines learned to understand and communicate like we do, they may appear altogether strange and alien given the differences in our lived experiences.
Distant futures
For reasons like these, Mitchell believes that the future of general human-like artificial intelligence is far off. New research is necessary to understand and develop the sort of common knowledge that machines might need to make sense of their world. Even in living minds, “neuroscientists have very little understanding of how such mental models… emerge from the activities of billions of connected neurons”, writes Mitchell. Much about the brain and artificial intelligence certainly remains to be discovered.
She notes, of course, that predictions like these are often disproven by progress. In 1943, IBM’s chairman Thomas Watson predicted that “there is a world market for maybe five computers.” Three decades later, Digital Equipment Corporation’s cofounder Ken Olsen proclaimed that “there’s no reason for individuals to have a computer in their home.” Even the cognitive scientist Douglas Hofstadter predicted in 1979 that dedicated programs would be unable to surpass elite chess players.
Much of this suggests that the possibilities are vast, and that our general conception of AI is likely to change. When Deep Blue defeated then World Chess Champion Garry Kasparov in 1997, the bar for artificial intelligence simply moved higher. The more we learn about AI, the more we seem to learn about ourselves. In this way, the future of AI may rest somewhat beyond our current imagination and understanding. Perhaps the endpoint will be biologically inspired but something altogether different.
Sources and further reading
- Mitchell, Melanie. (2019). Artificial Intelligence.
- Calvin, William. (1996). How Brains Think.
- Ferris, Timothy. (1992). The Mind’s Sky.
- Kasparov, Garry. (2017). Deep Thinking.
- Surowiecki, James. (2004). The Wisdom of Crowds.