Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi

  • September 13, 2018

About this Episode

Episode 67 of Voices in AI features host Byron Reese and Amir Khosrowshahi discussing explainability, privacy, and other implications of using AI for business. Amir Khosrowshahi is VP and CTO at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today I’m so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor’s Degree from Harvard in Physics and Math, a Master’s Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.

Amir Khosrowshahi: Thank you, thanks for having me.

I can’t imagine someone better suited to talking about the kinds of things we talk about on this show, because you’ve got a PhD in Computational Neuroscience, so, start off by just telling us what is Computational Neuroscience?

So neuroscience is the study of the brain, and it is a mostly biologically minded field. There are aspects of the brain that are computational, and there are aspects that involve opening up the skull, peering inside, sticking needles into areas, and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning, and mathematical aspects to intelligence, and then biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.

I have a theory which I may not be qualified to have and you certainly are, and I would love to know your thoughts on it. I think it’s very interesting that people are really good at getting trained with a sample size of one: draw a made-up alien you’ve never seen before, and then I can show you a series of photographs, and even if that alien’s upside down, underwater, or behind a tree, you can spot it.

Further, I think it’s very interesting that people are so good at transfer learning. I could give you two objects, like a trout swimming in a river and that same trout in a jar of formaldehyde in a laboratory, and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know. Likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say “yes,” and then somebody would say, “Well, have you ever done it?” And I’m like, “yeah,” and they would say, “when?” And it’s like, I don’t really remember, but I know I have. Somehow we take data and throw it out, and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain that you could cut out and somehow forget. And so when I think of all of those things that seem so different from computers to me, I kind of have a sense that human intelligence doesn’t really tell us anything about how to build artificial intelligence. What do you say?

Okay, those are very deep questions, and actually each one of those items is a separate thread in the field of machine learning and artificial intelligence. There are lots of people working on these things. The first thing you mentioned, I think, was one-shot learning, where you see something that’s novel. From the first time you see it, you recognize it as something singular, and you retain that knowledge to identify it if it occurs again: for a child that would be something like a chair, for you it’s potentially an alien. So, how do you learn from single examples?

That’s an open problem in machine learning, and it is very actively studied because you want to have a parsimonious strategy for learning. The current ways we’re doing learning in, for example, online services that sort photos and recognize objects in images are very computationally wasteful, and also wasteful in their usage of data. You have to see many examples of chairs to have an understanding of a chair, and it’s actually not clear that you then have an understanding of a chair, because the models that we have today for chairs do make mistakes. When you peer into where the mistakes were made, it seems like the machine learning model doesn’t actually have an understanding of a chair; it doesn’t have a semantic understanding of a scene, or of grammar, or of languages that are translated. We’re noticing these inefficiencies and we’re trying to address them.

You mentioned some other things, such as how you transfer knowledge from one domain to the next. Humans are very good at generalizing. We see an example of something in one context, and it’s amazing that you can extrapolate or transfer it to a completely different context. That’s also something that we’re working on quite actively, and we have some initial success: we can take a statistical model that was trained on one set of data and then apply it to another set of data, using that previous experience as a warm start and then moving away from the old domain to the new domain. This is also possible to do in continuous time.

Many of the things we experience in the real world are not stationary; their statistics change with time. We need to have models that can also change. For a human it’s easy to do that; humans are good at handling non-stationary statistics, so we need to build that into our models, be cognizant of it, and we’re working on it. And then [for] other things you mentioned: intuition is very difficult, potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having a kind of hazy idea of having done something bad to yourself with a hammer, I’m not actually sure where that falls into the various subdomains of machine learning.
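The warm-start idea Amir describes can be sketched in a few lines. The sketch below is purely illustrative: the synthetic Gaussian data, the bias-free logistic-regression model, and the step counts are all assumptions for the example, not anything drawn from Intel's actual systems. A model is first trained on a source domain, then fine-tuned with only a few gradient steps on a target domain whose statistics have shifted, rather than being retrained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, steps=200, lr=0.5):
    """Gradient-descent logistic regression; passing `w` warm-starts the weights."""
    if w is None:
        w = np.zeros(X.shape[1])  # cold start
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on log loss
    return w

def make_domain(shift, n=200):
    """Two Gaussian classes; `shift` moves both class means (a domain change)."""
    X0 = rng.normal(loc=-1 + shift, size=(n, 2))
    X1 = rng.normal(loc=+1 + shift, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

# Source domain: train from scratch.
Xa, ya = make_domain(shift=0.0)
w_source = train_logreg(Xa, ya)

# Target domain (shifted statistics): warm-start from the source weights,
# taking only a few gradient steps instead of retraining from zero.
Xb, yb = make_domain(shift=0.5)
w_target = train_logreg(Xb, yb, w=w_source.copy(), steps=20)

acc = np.mean((1.0 / (1.0 + np.exp(-Xb @ w_target)) > 0.5) == yb)
print(f"target-domain accuracy after warm start: {acc:.2f}")
```

The point of the sketch is the `w=w_source.copy()` call: the target-domain fit starts from the source solution, so far fewer steps are needed than a cold start would require, which is one simple version of the "previous experience as a warm start" strategy mentioned above.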

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: Voices in AI – Episode 67: A Conversation with Amir Khosrowshahi