Overcoming AI’s Limitations to Reach True Understanding


It is not an exaggeration to say that artificial intelligence (AI) is everywhere. In the business world, AI is an undeniable force, driving disruption and transforming industries as diverse as manufacturing, finance, retail, and healthcare. Google, Alexa, and Siri, meanwhile, have now become an almost indispensable part of everyday life for anyone with a smartphone or a computer (and let’s face it, that’s just about everyone).

While AI is already delivering all sorts of benefits today, its potential to shape the future is perhaps even greater. It's the reason so many companies are focused on scaling up AI-based innovations. It's also the reason some firms, such as Microsoft and OpenAI, are spending so much time working to advance AI to the next level: Artificial General Intelligence (AGI), the ability of an intelligent agent to actually understand or learn any intellectual task that a human can.

But while most of the AI research currently being conducted is betting that AGI represents the future of AI, very little of it is focused on developing the basic, underlying technology that would replicate the contextual understanding of humans. Instead, today's research relies for the most part on narrow intelligence models of varying specificity, built on today's AI algorithms. Sadly, that reliance means that, at best, AI only appears to be intelligent. It still depends on predetermined scripts and thousands or millions of training samples. It still doesn't comprehend that words and images represent physical things existing in a physical universe. And it still doesn't understand the concept of time or the idea that causes have effects.

To attain true AI understanding, researchers must shift their attention from building bigger data sets to replicating the contextual understanding of humans. Consider for a moment the intelligence displayed by the average three-year-old child. There is no contest in terms of what each can accomplish, right? AI is programmed to do all sorts of things, from routine Google searches to providing directions to predicting the likelihood of success for start-up businesses. The three-year-old, on the other hand, can’t do any of those things.

The child, however, can do the one thing that AI can't. No matter how sophisticated it is, today's AI is incapable of displaying any real form of intelligence. It does not have the situational awareness or contextual understanding of that three-year-old. Perhaps more importantly, it cannot do the one thing that every three-year-old can: grow to become a four-year-old, then a five-year-old, and eventually a fully functioning, generally intelligent adult.

To get to that point, AI must overcome the limitations of current AI research and its dependence on algorithms that comb through massive data sets for patterns and correlations without understanding any of the data being processed. In other words, it must add actual real-world understanding and everything that entails. But how?

Let’s revisit our three-year-old. Even though the child likely doesn’t realize it, he or she merges information from multiple senses to learn about simple tasks, such as stacking blocks. The child learns, for example, that blocks are physical things which exist in a three-dimensional world that is subject to physical laws; that square blocks can be stacked but round ones can’t; that careful stacking can build a taller stack. The child comprehends the passage of time and causality – blocks must be stacked before they can be knocked down.

These seemingly basic activities raise a fundamental question: Is it possible for AI to also understand time, space, and causality if it has never actually experienced them? To state it another way, can AI ever have common sense – an area of intelligence which current AI lacks?

Right now, the answer to both of these questions is no. But there is hope. To reach a point at which AI approaches true understanding, it must be able to explore and experiment in the real world with real objects, just as that three-year-old does. To get there, AI's computational system should more closely resemble the biological processes found in the human brain, while its algorithms should enable it to build abstract "things" with lists of connections, rather than the vast arrays, training sets, and computing power needed by today's AI.
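
To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names, not drawn from any existing system) of what "abstract things with lists of connections" might look like: knowledge stored as named items linked by labeled relationships rather than as large numeric arrays.

```python
class Item:
    """One abstract 'thing' the system knows about."""
    def __init__(self, name):
        self.name = name
        self.connections = []  # list of (relation, other Item)

    def connect(self, relation, other):
        """Record a directed, labeled link to another item."""
        self.connections.append((relation, other))

    def related(self, relation):
        """Return every item reachable via the given relation."""
        return [item for rel, item in self.connections if rel == relation]


# Build a tiny fragment of block-stacking knowledge.
block = Item("block")
square_block = Item("square block")
round_block = Item("round block")
stacking = Item("stacking")

square_block.connect("is_a", block)
round_block.connect("is_a", block)
square_block.connect("supports", stacking)        # square blocks can be stacked
round_block.connect("does_not_support", stacking)  # round ones can't

print([i.name for i in square_block.related("is_a")])  # ['block']
```

The point of the sketch is only that each piece of knowledge is a small, explicit link between things, so new connections can be added one experience at a time.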

From there, this unified knowledge base must be integrated with mobile sensory pods containing modules for sight, hearing, motion, and speech. These pods will allow the entire system to experience rapid sensory feedback with each of the actions it takes. This, in turn, will create an entire, end-to-end system that can begin to learn, understand, and ultimately, work better with people – as opposed to performing specific tasks for people – as it approaches true AGI.
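
A rough sketch of that sense-act feedback loop might look like the following. Every class and method here is a hypothetical placeholder rather than a real robotics API: the pod executes an action and reports what happened, and the knowledge base folds that feedback into what it tries next.

```python
import random

class SensoryPod:
    """Stand-in for a mobile pod with sight, hearing, motion, and speech modules."""
    def act(self, action):
        # Execute the action and return immediate sensory feedback.
        success = random.random() > 0.3  # placeholder for real sensor readings
        return {"action": action, "succeeded": success}

class KnowledgeBase:
    """Stand-in for the unified knowledge base that learns from feedback."""
    def __init__(self):
        self.experience = []

    def update(self, observation):
        self.experience.append(observation)

    def choose_action(self):
        # Trivial policy: retry what just failed, otherwise explore something new.
        if self.experience and not self.experience[-1]["succeeded"]:
            return self.experience[-1]["action"]
        return random.choice(["stack block", "push block", "look around"])


pod = SensoryPod()
kb = KnowledgeBase()
for _ in range(5):                      # one short exploration episode
    action = kb.choose_action()
    feedback = pod.act(action)          # rapid sensory feedback for each action
    kb.update(feedback)                 # the loop is how understanding accrues
```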

The good news is that the robotics technology providing the visual, manipulation, and touch sensors needed to make this system work is readily available. The key to attaining AGI, however, lies not in the sensory pods themselves but in the control the AI's computational system exerts over them. Together, they enable the same kind of exploration, experimentation, and learning that our three-year-old exhibits in his or her environment.

Ultimately, this AGI will create connections on its own between different types of real-world sensory input (such as sight, sound, and touch) in the same way that a human brain interprets everything it knows in the context of everything else it knows. The knowledge it acquires can subsequently be used for other applications, including personal assistants like Alexa and Siri, automated customer service systems, and even other robots.
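
A simple, hypothetical illustration of that cross-modal linking (again, nothing here is an existing API) is to attach observations from different senses to the same underlying item, so that a sound or a touch can later recall what the object looks like.

```python
from collections import defaultdict

class CrossModalMemory:
    def __init__(self):
        self.links = defaultdict(set)  # item name -> set of (modality, feature)

    def observe(self, item, modality, feature):
        """Attach one sensory feature of an item under a given modality."""
        self.links[item].add((modality, feature))

    def recall(self, item, modality):
        """Retrieve what is known about an item in one modality."""
        return [f for m, f in self.links[item] if m == modality]


memory = CrossModalMemory()
memory.observe("block", "sight", "red cube")
memory.observe("block", "touch", "hard and smooth")
memory.observe("block", "sound", "clack when dropped")

print(memory.recall("block", "sight"))  # ['red cube']
```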

Only when the AI community recognizes the need for AI to replicate the contextual understanding of the human brain, thereby enabling AI to attain true intelligence and understanding, will AGI be able to emerge, completely transforming the AI industry as it does.
