Strong AI

For the strong AI hypothesis, see philosophy of artificial intelligence

Strong AI is a term used by futurists, science fiction writers and forward-looking researchers to describe artificial intelligence that matches or exceeds human intelligence.[1] Strong AI is also referred to as the ability to perform "general intelligent action",[2] or as "artificial general intelligence",[3] "artificial consciousness", "sentience", "sapience", "self-awareness" or "consciousness"[4] (although there are subtle differences in the use of each of these terms).

Some references classify artificial intelligence research into "strong AI, applied AI and cognitive simulation."[5] Applied AI (also called "narrow AI"[1] or "weak AI"[6]) refers to the use of software to study or accomplish specific problem solving or reasoning tasks that do not encompass (or in some cases, are completely outside of) the full range of human cognitive abilities.

History

Origin of the term

The term "strong AI" was adopted from the name of an argument in the philosophy of artificial intelligence first identified by John Searle in 1980.[7] He wanted to distinguish between two different hypotheses about artificial intelligence:[8]

  • An artificial intelligence system can think and have a mind.[9]
  • An artificial intelligence system can (only) act like it thinks and has a mind.

The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all the abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage, which is fundamentally different from the subject of this article, is common in academic AI research and textbooks.[10]

The term "strong AI" is now used to describe any artificial intelligence system that acts like it has a mind,[1] regardless of whether a philosopher would be able to determine if it actually has a mind or not. As Russell and Norvig write: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[11]

AI researchers are interested in a related statement (that some sources confusingly call "the strong AI hypothesis"):[12]

  • An artificial intelligence system can think (or act like it thinks) as well or better than people do.

This assertion, which hinges on the breadth and power of machine intelligence, is the subject of this article.

Strong AI research

File:Hal-9000.jpg
HAL 9000 from Stanley Kubrick's 2001: A Space Odyssey, who embodied what early AI researchers believed they would accomplish by 2001

Modern AI research began in the mid-1950s.[13] The first generation of AI researchers was convinced that strong AI was possible and that it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."[14] Their predictions inspired Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create.

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. The agencies that funded AI became skeptical of strong AI and put researchers under increasing pressure to produce useful technology, or "applied AI". By 1974, funding for AI projects was hard to find.[15]

As the 1980s began, Japan's fifth generation computer project revived interest in strong AI, setting out a ten-year timeline that included strong AI goals like "carry on a casual conversation".[16] In response to this and to the success of expert systems, both industry and government pumped money back into the field.[17] However, the market for AI collapsed spectacularly in the late 1980s, and the goals of the fifth generation computer project were never fulfilled.[18] For the second time in twenty years, AI researchers who had predicted the imminent arrival of strong AI were shown to be fundamentally mistaken about what they could accomplish.

By the 1990s, AI researchers had gained a reputation for making promises they could not keep. Many AI researchers today are reluctant to make any kind of prediction at all[19] and avoid any mention of "human level" artificial intelligence, for fear of being labeled a "wild-eyed dreamer."[20] For the most part, researchers today choose to focus on specific sub-problems where they can produce verifiable results and commercial applications, such as neural nets, computer vision or data mining,[21] and most believe that these sub-problems must be solved before machines with strong AI can exist.[22] Interest in direct research into strong AI tends to come from outside the field, from internet entrepreneurs (such as Jeff Hawkins) or from futurists such as Ray Kurzweil.

Defining strong AI

A machine falls within the scope of strong AI if it approaches or surpasses human intelligence: if it can perform typically human tasks, apply a wide range of background knowledge, and exhibit some degree of self-consciousness. John McCarthy noted in his work What is AI? that we still do not have a solid definition of intelligence. Human-bound definitions of measurable intelligence, like IQ, cannot easily be applied to machine intelligence.

The most famous definition of AI is the operational one proposed by Alan Turing in the form of his "Turing test". There have been very few attempts to create such a definition since (some of them are in the AI Project).

A proposal to define a more easily quantifiable measure of artificial intelligence is:

Intelligence is the possession of a model of reality and the ability to use this model to conceive and plan actions and to predict their outcomes. The higher the complexity and precision of the model, the plans, and the predictions, and the less time needed, the higher is the intelligence.[1]

Research approaches

Artificial general intelligence

Artificial General Intelligence research aims to create AI that can fully replicate human-level intelligence, often called an Artificial General Intelligence (AGI) to distinguish it from less ambitious AI projects. As yet, researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. Some small groups of computer scientists are doing AGI research, however. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute (AGIRI) and the Singularity Institute for Artificial Intelligence. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the Palm Pilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective.

Simulated human brain model

This is seen by many as the quickest means of achieving strong AI, as it does not require complete understanding of how intelligence works. Basically, a very powerful computer would simulate a human brain, often in the form of a network of neurons. For example, given a map of all (or most) of the neurons in a functional human brain, and a good understanding of how a single neuron works, a computer program could simulate the working brain over time. Given some method of communication, this simulated brain might then be shown to be fully intelligent. The exact form of the simulation varies: instead of neurons, a simulation might use groups of neurons, or alternatively, individual molecules might be simulated. It is also unclear which portions of the human brain would need to be modeled: humans can still function while missing portions of their brains, and some areas of the brain are associated with activities (such as breathing) that might not be necessary for thinking.
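The kind of "simple equation" neuron model that such a simulation might use at its coarsest level can be illustrated with a leaky integrate-and-fire neuron, a standard textbook abstraction. This is a minimal sketch: the function name and all constants (threshold, leak rate, input current) are illustrative choices for the example, not biological measurements.

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential v
# integrates its input, leaks toward zero, and resets after a spike.
# All constants here are illustrative, not fitted to real neurons.

def simulate_lif(input_current, steps, dt=1.0, threshold=1.0, leak=0.1):
    """Return the time steps at which the model neuron spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t in range(steps):
        v += dt * (input_current - leak * v)  # integrate with leak
        if v >= threshold:                    # fire, then reset
            spikes.append(t)
            v = 0.0
    return spikes

# A constant input drives regular, periodic spiking.
print(simulate_lif(input_current=0.2, steps=50))
```

A whole-brain simulation of the sort described above would run billions of such units, coupled by synaptic connections, which is why the hardware estimates below dominate the discussion.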

This approach would require three things:

File:RIKEN MDGRAPE-3.jpg
The RIKEN MDGRAPE-3 supercomputer
  • Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil estimates 10 million MIPS, or ten petaflops. At least one special-purpose petaflops computer has already been built (the RIKEN MDGRAPE-3), and there are nine current computing projects (such as BlueGene/P) to build more general-purpose petaflops computers, all of which should be completed by 2008, if not sooner.[2] Most other estimates of the brain's computational power equivalent have been rather higher, ranging from 100 million MIPS to 100 billion MIPS. Furthermore, the overhead introduced by modeling the biological details of neural behaviour might require a simulator to have access to computational power much greater than that of the brain itself.
  • Software. Software to simulate the function of a brain would be required. This assumes that the human mind is the central nervous system and is governed by physical laws. Constructing the simulation would require a great deal of knowledge about the physical and functional operation of the human brain, and might require detailed information about a particular human brain's structure. Information would be required both of the function of different types of neurons, and of how they are connected. Note that the particular form of the software dictates the hardware necessary to run it. For example, an extremely detailed simulation including molecules or small groups of molecules would require enormously more processing power than a simulation that models neurons using a simple equation, and a more accurate model of a neuron would be expected to be much more expensive computationally than a simple model. The more neurons in the simulation, the more processing power it would require.
  • Understanding. Finally, sufficient understanding of the brain is required to model it mathematically. This could be done either by understanding the central nervous system or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power. However, the simulation would also have to capture the detailed cellular behaviour of neurons and glial cells, which is presently understood only in the broadest of outlines.

Once such a model is built, it could be easily altered and thus opened to trial-and-error experimentation. This would likely lead to huge advances in understanding, allowing the model's intelligence to be improved or its motivations altered.

The Blue Brain project aims to use one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5km of interconnecting synapses. The eventual goal of the project is to use supercomputers to simulate an entire brain.

The brain derives its power from performing many operations in parallel; a standard computer, from performing individual operations very quickly.

The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses.[23] By comparison, a modern computer microprocessor uses only about 1.7 billion transistors.[3] Although estimates of the brain's processing power put it at around 10^14 neuron updates per second,[24] it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10^18 FLOPS. By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS (10^9 FLOPS). (Each FLOP may require as many as 20,000 logic operations.)
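A quick back-of-envelope check makes the scale of these figures concrete. All inputs are the rough, order-of-magnitude estimates quoted above, not measurements.

```python
# Order-of-magnitude figures from the text above (rough estimates).
NEURONS   = 1e11   # ~100 billion neurons
SYNAPSES  = 1e14   # ~100 trillion synapses
BRAIN_SIM = 1e18   # assumed FLOPS for an unoptimized whole-brain simulation
CPU_FLOPS = 1e9    # ~1 GFLOPS for a circa-2006 general-purpose CPU

# How many such CPUs would the unoptimized simulation require?
cpus_needed = BRAIN_SIM / CPU_FLOPS
print(f"{cpus_needed:.0e} circa-2006 CPUs")

# Average number of synapses per neuron implied by the two counts.
print(SYNAPSES / NEURONS)
```

The billion-CPU gap implied by these numbers is why the hardware estimates above focus on petaflops-class supercomputers rather than commodity processors.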

However, a neuron is estimated to spike at most about 200 times per second, giving an upper limit on its number of operations. Signals between neurons are transmitted at a maximum speed of 150 meters per second. A modern 2 GHz processor operates at 2 billion cycles per second, or 10,000,000 times faster than a human neuron, and signals in electronic computers travel at roughly half the speed of light, faster than signals in humans by a factor of 1,000,000. The brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, roughly 50,000 times more. (Note: the Landauer limit is 3.5×10^20 operations per second per watt at room temperature.)
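These speed and power ratios can be recomputed explicitly; again, every input is one of the article's rough figures.

```python
# Rough figures quoted in the text above.
neuron_hz    = 200        # upper-limit neuron spike rate, per second
cpu_hz       = 2e9        # 2 GHz processor clock
signal_nerve = 150        # m/s, fastest nerve conduction
signal_wire  = 3e8 / 2    # m/s, ~half the speed of light in electronics
brain_watts  = 20         # brain power consumption
super_watts  = 1e6        # ~1 MW supercomputer

print(cpu_hz / neuron_hz)          # cycle-rate advantage of the CPU
print(signal_wire / signal_nerve)  # signal-speed advantage of electronics
print(super_watts / brain_watts)   # power cost of the supercomputer
```

The computer wins on raw speed by six to seven orders of magnitude; the brain wins on energy efficiency and on sheer parallelism, which is the trade-off the preceding paragraphs describe.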

Neuro-silicon interfaces have also been proposed.[4][5]

Critics of this approach believe it is possible to achieve AI directly without imitating nature, and often use the analogy that early attempts to construct flying machines modeled them after birds, yet modern aircraft do not look like birds.

Artificial consciousness research

Template:Expand

Emergence

Some have suggested that intelligence can arise as an emergent quality from the convergence of random, man-made technologies. Human sentience, or any other biological and naturally occurring intelligence, arises out of the natural process of species evolution and an individual's experiences. Discussion of this eventuality is currently limited to fiction and theory.

Notes

  1. Template:Harv or see Advanced Human Intelligence
  2. Newell & Simon 1963. This is the term they use for "human-level" intelligence in the physical symbol system hypothesis.
  3. Voss 2006
  4. These terms are not used here in their standard definitions, as understood by psychology, neuroscience or cognitive science, but as place-markers for a term that describes the essential property of human intelligence required by strong AI.
  5. Encyclopedia Britannica Strong AI, applied AI, and cognitive simulation or Jack Copeland What is artificial intelligence? on AlanTuring.net
  6. The Open University on Strong and Weak AI
  7. Searle 1980
  8. As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." Template:Harv
  9. The word "mind" has a specific meaning for philosophers, as used in the mind-body problem or the philosophy of mind.
  10. Among the many sources that use the term in this way are: Russell & Norvig 2003, Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia"), MIT Encyclopedia of Cognitive Science (quoted in "AITopics"), Planet Math, Arguments against Strong AI (Raymond J. Mooney, University of Texas), Artificial Intelligence (Rob Kremer, University of Calgary), Minds, Math, and Machines: Penrose's thesis on consciousness (Rob Craigen, University of Manitoba), The Science and Philosophy of Consciousness Alex Green, Philosophy & AI Bernard, Will Biological Computers Enable Artificially Intelligent Machines to Become Persons? Anthony Tongen, and the Usenet FAQ on Strong AI
  11. Russell & Norvig, p. 947
  12. A few sources where "strong AI hypothesis" is used this way: Strong AI Thesis, Neuroscience and the Soul
  13. Crevier 1993, pp. 48-50
  14. Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  15. The Lighthill report specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England. Template:Harv Template:Harv In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic undirected research". See Template:Harv under "Shift to Applied Research Increases Investment". See also Template:Harv and Template:Harv
  16. Crevier 1993, pp. 211, Russell & Norvig 2003, p. 24 and see also Feigenbaum & McCorduck 1983
  17. Crevier 1993, pp. 161-162, 197-203, 240, Russell & Norvig 2003, p. 25, NRC 1999 under "Shift to Applied Research Increases Investment"
  18. Crevier 1993, pp. 209-212
  19. As AI founder John McCarthy wrote in his Reply to Lighthill, "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."
  20. "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." Markoff, John (2005-10-14). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Retrieved 2007-07-30.
  21. Russell & Norvig 2003, pp. 25-26
  22. Hans Moravec wrote in 1988 "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts." Template:Harv
  23. "nervous system, human." Encyclopædia Britannica. 9 Jan. 2007
  24. Russell & Norvig 2003

References
