Voices in AI – Episode 89: A Conversation with Doug Lenat

June 13, 2019


About this Episode

Episode 89 of Voices in AI features Byron speaking with Cycorp CEO Douglas Lenat on developing AI and the very nature of intelligence.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. I couldn't be more excited today. My guest is Douglas Lenat. He is the CEO of Cycorp of Austin, Texas, where GigaOm is based, and he's been a prominent researcher in AI for a long time. He was awarded the biennial IJCAI Computers and Thought Award in 1976. He created the machine learning program AM, and he worked on (symbolic, not statistical) machine learning with his AM and Eurisko programs, knowledge representation, cognitive economy, blackboard systems and what he dubbed in 1984 "ontological engineering."

He's worked on military simulations and numerous intelligence projects for the government, and with scientific organizations. In 1980 he published a critique of conventional random-mutation Darwinism. He authored a series of articles in The Journal of Artificial Intelligence exploring the nature of heuristic rules. But that's not all: he was one of the original Fellows of AAAI, and he's the only individual to have served on the scientific advisory boards of both Apple and Microsoft. He is a Fellow of AAAI and the Cognitive Science Society, and one of the original founders of TTI/Vanguard in 1991. And on and on and on… and he was named one of the WIRED 25. Welcome to the show!

Douglas Lenat: Thank you very much Byron, my pleasure.

I have been so looking forward to our chat and I would just love, I mean I always start off asking what artificial intelligence is and what intelligence is. And I would just like to kind of jump straight into it with you and ask you to explain, to bring my listeners up to speed with what you’re trying to do with the question of common sense and artificial intelligence.

I think that the main thing to say about intelligence is that it's one of those things that you recognize when you see it, or recognize in hindsight. So intelligence to me is not just knowing things, not just having information and knowledge, but knowing when and how to apply it, and actually successfully applying it in those cases. And what that means is that it's all well and good to store millions or billions of facts.

But intelligence really involves knowing the rules of thumb, the rules of good judgment, the rules of good guessing that we all almost take for granted in our everyday life in common sense, and that we may learn painfully and slowly in some field where we’ve studied and practiced professionally, like petroleum engineering or cardiothoracic surgery or something like that. And so common sense rules like: bigger things can’t fit into smaller things. And if you think about it, every time that we say anything or write anything to other people, we are constantly injecting into our sentences pronouns and ambiguous words and metaphors and so on. We expect the reader or the listener has that knowledge, has that intelligence, has that common sense to decode, to disambiguate what we’re saying.

So if I say something like “Fred couldn’t put the gift in the suitcase because it was too big,” I don’t mean the suitcase was too big, I must mean that the gift was too big. In fact if I had said “Fred can’t put the gift in the suitcase because it’s too small” then obviously it would be referring to the suitcase. And there are millions, actually tens of millions of very general principles about how the world works: like big things can’t fit into smaller things, that we all assume that everybody has and uses all the time. And it’s the absence of that layer of knowledge which has made artificial intelligence programs so brittle for the last 40 or 50 years.
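To make that concrete, here is a minimal sketch in Python (the function and names are invented for illustration and are nothing like Cyc's actual machinery) of how a single common-sense rule about sizes can resolve the pronoun in the gift-and-suitcase sentences:

```python
def resolve_pronoun(item, container, complaint):
    """Guess the referent of 'it' in: 'Fred couldn't put the <item> in the
    <container> because it was <complaint>'.

    Assumed common-sense rule: something fits into a container only if it
    is smaller than the container, so the attempt can fail either because
    the item is too big or because the container is too small.
    """
    if complaint == "too big":
        return item        # a too-big container would not explain the failure
    if complaint == "too small":
        return container   # a too-small item would not explain the failure
    return None            # no common-sense reading available

print(resolve_pronoun("gift", "suitcase", "too big"))    # -> gift
print(resolve_pronoun("gift", "suitcase", "too small"))  # -> suitcase
```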

My number one question, which I ask every AI as a sort of Turing test, is: what's bigger, a nickel or the sun? And there's never been one that's been able to answer it. And that's the problem you're trying to solve.

Right. And I think that there are really two sorts of phenomena going on here. One is understanding the question and knowing the sense in which you're talking about 'bigger': one is the sense of perception, if you're holding up a nickel in front of your eye and so on, and the other, of course, is objectively knowing that the sun is actually quite a bit larger than a typical nickel.

And so one of the things that we have to bring to bear, in addition to everything I already said, are Grice's rules of communication between human beings, where we have to assume that the person is asking us something which is meaningful. And so we have to decide what meaningful question they could really possibly have in mind. If someone says "Do you know what time it is?" it's fairly juvenile and jerky to say "yes," because obviously what they mean is: please tell me the time. And so in the case of the nickel and the sun, you have to disambiguate whether the person is talking about a perceptual phenomenon or an actual unstated physical reality.

So I wrote an article that I put a lot of time and effort into, and I really liked it. I ran it on GigaOm, and it was 10 questions that Alexa and Google Home answered differently, even though objectively the answers should have been identical, and in each one I tried to dissect what went wrong.

And so I'm going to give you two of them, and my guess is you'll probably be able to intuit in both of them what the problem was. The first one was: who designed the American flag? And they gave me different answers. One said "Betsy Ross," and one said "Robert Heft," so why do you think that happened?

All right, so in some sense both of them are doing what you might call an 'animal level intelligence' of not really understanding what you're asking at all, but in fact doing the equivalent of (I won't even call it natural language processing) let's call it 'string processing': looking at processed web pages, looking for the confluence, preferably in the same order, of some of the words and phrases that were in your question, and looking for essentially sentences of the form "X designed the U.S. flag" or something.

And it’s really no different than if you ask, “How tall is the Eiffel Tower?” and you get two different answers: one based on answering from the one in Paris and one based on the one in Las Vegas. And so it’s all well and good to have that kind of superficial understanding of what it is you’re actually asking, as long as the person who’s interacting with the system realizes that the system isn’t really understanding them.
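A toy illustration of that kind of 'string processing' (the pages and pattern here are invented): scan indexed text for sentences shaped like "X designed the American flag" and return X, so two assistants that indexed different pages will confidently return two different names:

```python
import re

# Two invented "web pages" a string-matching answerer might have indexed.
pages_a = ["Betsy Ross designed the American flag, according to popular legend."]
pages_b = ["Robert Heft designed the American flag with 50 stars as a school project."]

def string_processing_answer(pages):
    """Return the first X found in a sentence of the form 'X designed the American flag'."""
    for page in pages:
        match = re.search(r"([A-Z]\w+ [A-Z]\w+) designed the American flag", page)
        if match:
            return match.group(1)  # first surface match wins; no understanding involved
    return None

print(string_processing_answer(pages_a))  # -> Betsy Ross
print(string_processing_answer(pages_b))  # -> Robert Heft
```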

It's sort of like your dog fetching a newspaper for you. It's something which is, you know, wagging its tail and putting things in front of you, and then you, as the person who has intelligence, have to look at it and disambiguate: what does this answer actually imply about what it thought the question was, as it were, or what question is it actually answering, and so on.

But this is one of the problems that we experienced about 40 years ago in artificial intelligence, in the 1970s. We built AI systems using what today would very clearly be called neural net technology (maybe there's been one small tweak in that field that's worth mentioning, involving additional hidden layers and convolution), and we built AIs using symbolic reasoning that used logic, much like our Cyc system does today.

And again the actual representation looks very similar to what it does today and there had to be a bunch of engineering breakthroughs along the way to make that happen. But essentially in the 1970s we built AIs that were powered by the same two sources of power you find today, but they were extremely brittle and they were brittle because they didn’t have common sense. They didn’t have that kind of knowledge that was necessary in order to understand the context in which things were said, in order to understand the full meaning of what was said. They were just superficially reasoning. They had the veneer of intelligence.

We might have a system which was the world’s expert at deciding what kind of meningitis a patient might be suffering from. But if you told it about your rusted out old car or you told it about someone who is dead, the system would blithely tell you what kind of meningitis they probably were suffering from because it simply didn’t understand things like inanimate objects don’t get human diseases and so on.
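A minimal sketch of that brittleness (the rules and fields below are invented, not any real medical system): the domain rules fire no matter what they are handed, unless common-sense guards are added in front of them:

```python
def diagnose_meningitis(patient):
    # Domain-only rules: pick a meningitis type from the reported symptoms.
    if "stiff neck" in patient.get("symptoms", []):
        return "bacterial meningitis (suspected)"
    return "viral meningitis (suspected)"

def diagnose_with_common_sense(patient):
    # The layer the 1970s systems lacked: sanity checks before the domain rules.
    if not patient.get("is_animate", False):
        return "not applicable: inanimate objects don't get human diseases"
    if not patient.get("is_alive", True):
        return "not applicable: the patient is not alive"
    return diagnose_meningitis(patient)

rusty_car = {"symptoms": ["rust", "won't start"], "is_animate": False}
print(diagnose_meningitis(rusty_car))         # blithely offers a diagnosis anyway
print(diagnose_with_common_sense(rusty_car))  # refuses, the way a person would
```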

And so it was clear that somehow we had to pull the mattress out of the road in order to let traffic toward real AI proceed. Someone had to codify the tens of millions of general principles like: non-humans don't get human diseases, causes don't happen before their effects, large things don't fit into smaller things, and so on. It was very important that somebody do this project.

We thought we were actually going to have a chance to do it with Alan Kay at the Atari research lab and he assembled a great team. I was a professor at Stanford in computer science at the time, so I was consulting on that, but that was about the time that Atari peaked and then essentially had financial troubles as did everyone in the video game industry at that time, and so that project splintered into several pieces. But that was the core of the idea that somehow someone needed to collect all this common sense and represent it and make it available to make our AIs less brittle.

And then an interesting thing happened right at that point in time, when I was beating my chest and saying 'hey, someone please do this': America was frightened to hear that the Japanese had announced something they called the 'fifth generation computing effort.' Japan basically threatened to do in computing hardware and software and AI what they had just finished doing in consumer electronics and in the automotive industry: namely, wresting leadership away from the West. And so America was very scared.

Congress quickly passed something (that's how you can tell it was many decades ago) called the National Cooperative Research Act, which basically said 'hey, all you large American companies: normally if you colluded on R&D, we would prosecute you for antitrust violations, but for the next 10 years, we promise we won't do that.' And so around 1981 a few research consortia sprang up in the United States for the first time in computing and hardware and artificial intelligence, and the first one of those was right here in Austin. It was called MCC, the Microelectronics and Computer Technology Corporation. Twenty-five large American companies each contributed a small number of millions of dollars a year to fund high-risk, high-payoff, long-term R&D projects, projects that might take 10 or 20 or 30 or 40 years to reach fruition, but which, if they succeeded, could help keep America competitive.

And Admiral Bob Inman, who's also an Austin resident, one of my favorite people, one of the smartest and nicest people I've ever met, was the head of MCC, and he came and visited me at Stanford and said "Hey look, Professor, you're making all this noise about what somebody ought to do. You have six or seven graduate students. If you do that there, it's going to take you a few thousand person-years, which means it's going to take you a few hundred years to do that project. If you move to the wilds of Austin, Texas and we put in ten times that effort, then you'll just barely live to see the end of it a few decades from now."

And that was a pretty convincing argument, and in some sense that is the summary of what I've been doing for the last 35 years here: taking time off from research to do an engineering project, a massive engineering project called Cyc, which is collecting that information and representing it formally, putting it all in one place for the first time.

And the good news, since you've waited thirty-five years to talk to me, Byron, is that we're nearing completion, which is a very exciting phase to be in. And so most of our funding these days at Cycorp doesn't come from the government anymore, doesn't come from just a few companies anymore; it comes from a large number of very large companies that are actually putting our technology into practice, not just funding it for research reasons.

So that's big news. So, just to summarize all of that: you've spent the last 35 years working on a system that gathers all of these rules of thumb like 'big things can't go in small things,' listing them all out, every one of them (dark things are darker than light things), and then not just listing them like in an Excel spreadsheet, but learning how to express them all in ways that they can be programmatically used.

So what do you have in the end when you have all of that? Like when you turn it on, will it tell me which is bigger: a nickel or the sun?

Sure. And in fact, for most of the questions that you might think anyone ought to be able to answer, Cyc is actually able to do a pretty good job. It doesn't understand unrestricted natural language, so sometimes we'll have to encode the question in logic, in a formal language, but the language is pretty big. In fact the language has about a million and a half words, and of those, about 43,000 are what you might think of as relationship-type words, like 'bigger than' and so on. And so by representing all of the knowledge in that logical language, instead of say just collecting all of it in English, what you're able to do is have the system do automatic mechanical inference, logical deduction, so that if there is something which logically follows from one or two or 2,000 statements, then Cyc (our system) will grind through automatically and mechanically come up with that entailment.
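As a rough illustration of what that mechanical inference means (an invented toy in Python, not CycL and not Cyc's inference engine), a few stored facts plus two hand-written rules let a forward chainer derive the nickel-versus-sun answer as an entailment rather than look it up as a stored fact:

```python
# Stored assertions, written as (relation, arg1, arg2) tuples.
facts = {
    ("biggerThan", "sun", "earth"),
    ("biggerThan", "earth", "coin"),
    ("isa", "nickel", "coin"),
}

def forward_chain(facts):
    """Apply two rules repeatedly until no new facts appear (a fixed point)."""
    facts = set(facts)
    while True:
        new = set()
        for (p1, a, b) in facts:
            for (p2, c, d) in facts:
                # Rule 1: biggerThan is transitive.
                if p1 == p2 == "biggerThan" and b == c:
                    new.add(("biggerThan", a, d))
                # Rule 2: bigger than a whole category implies bigger than its instances.
                if p1 == "biggerThan" and p2 == "isa" and b == d:
                    new.add(("biggerThan", a, c))
        if new <= facts:      # nothing genuinely new was derived
            return facts
        facts |= new

derived = forward_chain(facts)
print(("biggerThan", "sun", "nickel") in derived)  # True: derived, not stored
```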

And so this is really the place where we diverge from everyone else in AI, who are either satisfied with machine learning representation, which is a very shallow, almost stimulus-response-pair type of representation of knowledge; or who are working in knowledge graphs and triple and quad stores and what people call ontologies these days, which you can think of as something like three or four word English sentences. And there are an awful lot of problems you can solve just with machine learning.

There is an even larger set of problems you can solve with machine learning plus that kind of taxonomic knowledge representation and reasoning. But in order to really capture the full meaning, you really need an expressive logic: something that is as expressive as English. Think in terms of taking one of your podcasts and forcing it to be rewritten as a series of three-word sentences. It would be a nightmare. Or imagine taking something like Shakespeare's Romeo and Juliet and trying to rewrite it as a set of three or four word sentences. It probably could theoretically be done, but it wouldn't be any fun to do, and it certainly wouldn't be any fun to read or listen to if people did that. And yet that's the tradeoff that people are making. The tradeoff is that if you use that limited a logical representation, then it's very easy and well understood to do the mechanical inference that's needed very efficiently.

So if you represent a set of is-a-type-of relationships, you can combine them and chain them together and conclude that a nickel is a type of coin, or something like that. But there really is this difference between the expressive logics that have been understood by philosophers for over 100 years, starting with Frege and Whitehead and Russell and others, and the limited logics that others in AI are using today.
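As a rough sketch of that expressiveness gap (the predicate names are invented for illustration, not CycL), the taxonomic fact fits comfortably in a triple, while the earlier rule that bigger things can't fit into smaller things needs variables, quantifiers, and negation:

```latex
% Triple-sized assertion: subject, relation, object.
\mathit{isaTypeOf}(\mathrm{nickel},\ \mathrm{coin})

% Common-sense rule requiring first-order expressiveness:
% "bigger things can't fit into smaller things."
\forall x\,\forall y\ \bigl(\mathit{biggerThan}(x, y) \rightarrow \lnot\,\mathit{fitsInside}(x, y)\bigr)
```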

And so we essentially started digging this tunnel from the other side and said, "We're going to be as expressive as we have to be, and we'll find ways to make it efficient," and that's what we've done. That's really the secret of what we've done: not just the massive codification and formalization of all of that common sense knowledge, but finding what turned out to be about 1,100 tricks and techniques for speeding up the inferring, the deducing process, so that we could get answers in real time instead of requiring thousands of years of computation.

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
