Summary: Looking beyond today’s commercial applications of AI, where and how far will we progress toward an Artificial Intelligence with truly human-like reasoning and capability? This is about the pursuit of Artificial General Intelligence (AGI).
There is no question that we’re making a lot of progress in artificial intelligence (AI). So much so that we are rapidly approaching, or have already arrived at, a plateau in development where more effort is being put into commercializing existing AI capabilities than into improving them.
As far back as November 2014, Kevin Kelly, cofounder of Wired magazine and prolific futurist, observed: “The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.” Well, Kevin, you were right. That day has arrived. Hundreds, if not thousands, of companies, from majors to startups, are piling in to add AI to everything: our living rooms, our light bulbs, our cars, dating, wealth management, doing our taxes; the list goes on. Even without any further improvement over where we are today, this could be as significant as the electrification of America in the 1920s.
And yet … and yet we hesitate to call our smart robots of today true thinking machines. As we wrote a few weeks ago in our article “The Data Science Behind AI,” the current capabilities of our AI-enhanced robots are looking pretty complete. They can:
See: Still and video image recognition.
Hear: Receive input via text or spoken language.
Speak: Respond meaningfully to our input, in the same language or any other.
Make human-like decisions: Offer advice or new knowledge.
Learn: Change behavior based on changes in its environment.
Move and manipulate physical objects.
But for all these advances, none of us would yet say this equals human-like thought or capability.
In the field of AI, achieving human comparability is known as Artificial General Intelligence (AGI).
How Will We Know When AGI Has Been Achieved?
Going back 67 years to the Turing Test, a variety of tests designed to determine whether AGI has been achieved have been proposed.
The Turing Test: Proposed 67 years ago (1950) by Alan Turing, an early innovator in computing and AI. He proposed that we would know AGI had arrived when a computer, communicating conversationally (over a keyboard; it’s 1950 after all), could convince a human interrogator that it too was human.
But over the years, as our understanding of what AI might yet become has expanded, other thinkers have proposed more rigorous tests.
The Coffee Test: Steve Wozniak, cofounder of Apple, observed in a 2007 article that he would believe AI had arrived when a robot could enter a strange house and make a decent cup of coffee. It sounds flippant, but the skills involved actually make a pretty good test. The test is often credited to AI researcher Ben Goertzel, who wrote it down in 2012, but he in turn credited it to Wozniak.
The Robot College Student: Ben Goertzel went on to refine his set of requirements, also in 2012, proposing that we would know we had AGI when a robot could enroll in a college and earn a degree using the same resources and methods as a human student. By the way, a Chinese robot is set to compete with grade 12 students during the country’s national college entrance examination in 2017, with the goal of scoring well enough to enter first-class universities. We’ll have to wait and see how that turns out.
The Employment Test: Finally, Nils Nilsson, one of the founders of the AI movement, proposed that the requirement should be a robot that could perform economically important jobs. He went on to list about 22 jobs he thought were ripe for automation, ranging from Meeting Planner, Financial Examiner, and Computer Programmer to Home Health Aide, Paralegal, and Marriage Counselor.
Perhaps most controversial of all, both Goertzel and Nilsson make specific reference to the fact that if their tests were met, this would be proof of robot ‘consciousness’.
What’s the Gap Between Current AI and True AGI?
There are a number of capabilities that our current second-generation AI doesn’t yet have, or at least not in sufficient measure. Our current AI systems can’t yet:
- Learn from one source and apply that learning to a completely unrelated field. They can’t generalize at the level humans do.
- Remember. That is, recall a task once learned and apply it again to other data or other environments.
- Be miniaturized. Today’s systems are very energy-hungry, which stands in the way of making them tiny.
- Learn in a truly unsupervised manner. All of our current AI and data science technologies require large amounts of training data, almost always labeled data (see the sketch just after this list).
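To make that last point concrete, here is a minimal sketch of the labeled-data dependence. It uses scikit-learn and its bundled iris dataset purely as an illustration; the specific model and data are stand-ins, but the structural point holds for nearly all of today’s supervised learners: the fit step cannot even begin without a human-supplied label vector.

```python
# A minimal sketch of why today's supervised learners need labeled data.
# Assumes scikit-learn is installed; model and dataset are illustrative.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # X: measurements, y: labels a human assigned

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)              # without y, there is nothing here to learn from
print(clf.predict(X[:3]))  # and it can only ever predict labels it was taught
```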
However, you must admit that at some level these sound like incremental problems that will yield over time. Take the emerging field of Spiking Neural Nets (SNNs), also known as neuromorphic computing, which promises to resolve all of these issues. And if that happens, will our robots then possess AGI? There is disagreement over the path to attaining AGI.
Top Down, Bottom Up, or Meet in the Middle
These short descriptors do a pretty good job of describing the camps into which different researchers fall in pursuit of AGI.
Top Down
Top Down is an extension of our current incremental engineering approach. Basically, it says that once the sum of all these engineering problems has been solved, the resulting capabilities will in fact constitute AGI. Those who disagree, however, say that truly human-like intelligence can never result from simply adding up a group of specialized algorithms: human intelligence cannot be reduced to the sum of mathematical parts, and neither can AGI.
Bottom Up
Bottom Up is the realm of researchers who propose to build an electronic analogue of the entire human brain: an all-purpose, generalized platform based on an exact simulation of human brain function. Once such a platform is available, the argument goes, it will immediately be able to do everything our current piecemeal approach has accomplished, and much more.
Bottom Up isn’t only a hardware problem, but hardware comes up as the greatest obstacle. Yet when different experts evaluate the rate at which computing capability is expanding, they variously arrive at dates in the range of 2025 to 2040 as the outside dates by which sufficient computing power to create a brain analogue will be reasonably available.
As for the wetware (er, software), spiking neural nets and neuromorphic computing models are in their infancy, but they do aim to work the way neurons in the brain actually function. An interesting side note: at least one early SNN system, called SPAUN, has shown great promise but also tends to make exactly the sorts of mistakes that human test groups make. This is taken as at least partial proof that it’s thinking like a human.
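For readers who want a feel for what “spiking” means in practice, here is a toy sketch of a leaky integrate-and-fire neuron, the simplest building block used in most SNN work. The parameter values are illustrative only and are not drawn from SPAUN or any particular system.

```python
# A toy leaky integrate-and-fire (LIF) neuron, the basic unit of most
# spiking neural nets. All parameter values below are illustrative.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    """Return the membrane-voltage trace and spike times for an input series."""
    v = v_rest
    voltages, spike_times = [], []
    for t, i_in in enumerate(input_current):
        # The membrane leaks toward its resting potential while integrating input.
        v += (dt / tau) * (v_rest - v) + dt * i_in
        if v >= v_threshold:       # threshold crossed: the neuron fires a spike
            spike_times.append(t)
            v = v_reset            # and its membrane potential resets
        voltages.append(v)
    return np.array(voltages), spike_times

# A steady input current produces a regular spike train.
volts, spikes = simulate_lif(np.full(100, 0.08))
print(f"{len(spikes)} spikes at time steps {spikes}")
```

Unlike the continuous activations of conventional deep nets, information here is carried by the timing of discrete spikes, which is what makes neuromorphic hardware event-driven and power-efficient, and why SNNs are considered closer to how biological neurons behave.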
Meet in the Middle – The Golden Spike
Call this group the pragmatists. Hans Moravec, futurist and faculty member of Carnegie Mellon’s Robotics Institute, wrote:
“I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way … Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts.”
This approach meets with much the same criticism as Top Down: that it is far too modular to ever achieve true AGI. As Stevan Harnad of Princeton put it in his 1990 paper:
“…nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).”
Pros and Cons
It’s tempting to get behind the incremental engineering approach offered by Top Down or Meet in the Middle, since that seems like a very direct and understandable path. Yet the argument that the result would be just a mash-up of various data science modules makes truly human-like intelligence seem unlikely: perhaps extremely capable robots, but not truly human-like ones.
Bottom Up, the creation of a true brain analogue, is enticing. The hardware and software (emulation of neurons) seem like doable goals. However, we may yet be surprised by how much we don’t know about brain function.
Early research to model the brain of a honey bee seems to indicate that it may be necessary to emulate not only the brain but also the body. This is called the Extended Mind hypothesis, and research into cephalopods has likewise demonstrated that the ‘mind’ of these creatures extends beyond their brain into a decentralized system distributed throughout their bodies. It’s not unreasonable to think that something like this might also be true of humans.
Perhaps Even AGI Will Never Be Completely Human-Like
Whether you embrace this thought with disappointment or with relief that AGI robots will never replace us, there is essentially no discussion of, and no intent to pursue, extending AGI into what philosophically makes us human.
In our science fiction, these AGI robots have some or all of these features:
Consciousness: To have subjective experience and thought.
Self-awareness: To be aware of oneself as a separate individual, especially to be aware of one’s own thoughts and uniqueness.
Sentience: The ability to feel perceptions or emotions subjectively.
Sapience: The capacity for wisdom.
There’s very little in the literature about this potential dimension of AGI, and since these aren’t the functional drivers that make your self-driving car or your robot workmate function, they aren’t valuable commercial goals for AGI. Were they to arise, however, they could be troublesome from a moral and ethical standpoint. Some have suggested that these characteristics might give rise to legal rights similar to those we extend to non-human animals.
As to which of these approaches will win out, we’ll have to wait and see. So long as eager young researchers (and, of course, money) keep flowing into each of them, the race will continue until a clear leader emerges.
About the author: Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist and commercial predictive modeler since 2001. He can be reached at: