At the heart of the discipline of artificial intelligence is the idea that one day we'll be able to build a machine that's as smart as a human. Such a system is often referred to as an artificial general intelligence, or AGI, a name that distinguishes the concept from the broader field of study. It also makes clear that true AI possesses intelligence that is both broad and adaptable. To date, we've built countless systems that are superhuman at specific tasks, but none that can match a rat when it comes to general brain power.
But despite the centrality of this idea to the field of AI, there's little agreement among researchers as to when this feat might actually be achievable.
"Researchers guess by 2099, thereâ€™s a 50 percent chance weâ€™ll have built AGI"
In a new book published this week titled Architects of Intelligence, writer and futurist Martin Ford interviewed 23 of the most prominent men and women who are working in AI today, including DeepMind CEO Demis Hassabis, Google AI Chief Jeff Dean, and Stanford AI director Fei-Fei Li. In an informal survey, Ford asked each of them to guess by which year there will be at least a 50 percent chance of AGI being built.
Of the 23 people Ford interviewed, only 18 answered, and of those, only two went on the record. Interestingly, those two individuals provided the most extreme answers: Ray Kurzweil, a futurist and director of engineering at Google, suggested that by 2029, there would be a 50 percent chance of AGI being built, and Rodney Brooks, roboticist and co-founder of iRobot, went for 2200. The rest of the guesses were scattered between these two extremes, with the average estimate being 2099, which is 81 years from now.
Ford says that his interviews also revealed an interesting divide in expert opinion: not regarding when AGI might be built, but whether it was even possible using current methods.
Some of the researchers Ford spoke to said we have most of the basic tools we need, and building an AGI will just require time and effort. Others said we're still missing a great number of the fundamental breakthroughs needed to reach this goal. Notably, says Ford, researchers whose work was grounded in deep learning (the subfield of AI that's fueled this recent boom) tended to think that future progress would be made using neural networks, the workhorse of contemporary AI. Those with a background in other parts of artificial intelligence felt that additional approaches, like symbolic logic, would be needed to build AGI. Either way, there's quite a bit of polite disagreement.
"Some people in the deep learning camp are very disparaging of trying to directly engineer something like common sense in an AI," says Ford. "They think it's a silly idea. One of them said it was like trying to stick bits of information directly into a brain."
"Many experts say weâ€™re missing key building blocks to create AGI"
All of Ford's interviewees noted the limitations of current AI systems and mentioned key skills they've yet to master. These include transfer learning, where knowledge in one domain is applied to another, and unsupervised learning, where systems learn without human direction. (The vast majority of machine learning methods currently rely on data that has been labeled by humans, which is a serious bottleneck for development.)
Interviewees also stressed the sheer impossibility of making predictions in a field like artificial intelligence, where research has come in fits and starts and where key technologies have only reached their full potential decades after they were first discovered.