There's an interesting article in Wired called Artificial Intelligence Could Be on Brink of Passing Turing Test, which reports that some computer scientists think we might soon be able to build an intelligent computer capable of passing the Turing Test. The key breakthrough, they believe, is that thanks to the Internet, the cloud, and massive distributed processing power, it might be possible for software to catalog, analyze, correlate, and cross-link everything in the digital realm. "These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions and even pass a Turing test." [source Wired]
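As a reminder of what the test actually measures, here is a minimal sketch of Turing's imitation game in Python: a judge sees transcripts from two unseen parties and must decide which is the machine. The `machine_reply`, `human_reply`, and `naive_judge` functions are hypothetical stand-ins for illustration, not real AI.

```python
# Minimal sketch of the imitation game. All responders and the judge
# below are toy stand-ins; a real test uses live, free-form conversation.

def machine_reply(question):
    # A trivial canned-response bot -- the kind that fails the test.
    canned = {
        "What is 2 + 2?": "4",
        "Do you have feelings?": "Of course I do.",
    }
    return canned.get(question, "Interesting question.")

def human_reply(question):
    return "Hmm, let me think about that one."

def imitation_game(questions, judge):
    """Run one round: the judge sees both transcripts (labels hidden)
    and returns 'A' or 'B' for whichever it believes is the machine."""
    transcript_a = [(q, machine_reply(q)) for q in questions]
    transcript_b = [(q, human_reply(q)) for q in questions]
    guess = judge(transcript_a, transcript_b)
    return guess == "A"  # True if the judge caught the machine

def naive_judge(a, b):
    # Heuristic: short, overly crisp answers give the machine away.
    def crispness(transcript):
        return sum(1 for _, reply in transcript if len(reply) < 20)
    return "A" if crispness(a) > crispness(b) else "B"

caught = imitation_game(["What is 2 + 2?", "Do you have feelings?"], naive_judge)
print("machine identified" if caught else "machine passed")
```

The machine "passes" only when judges can no longer reliably tell the transcripts apart; the Wired argument is that vast data and analysis capacity might finally make the machine's answers indistinguishable.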
In the past, AI research has proposed three ways that computers might become intelligent:
- we engineer it from the ground up, as in the Cyc Project, but this approach has been largely discredited,
- we build a computer that can learn, as exemplified by HAL 9000 in 2001: A Space Odyssey, which remembered its first lessons, or
- we build computers with a very large number of interconnections, like neurons, and consciousness simply emerges, as with the computer Mike in Robert A. Heinlein's classic science-fiction novel The Moon Is a Harsh Mistress.
Perhaps it will be a combination of all three: engineering, learning, and serendipity. But once the machine is conscious, would it be ethical to switch it off?