Voices in AI – Episode 68: A Conversation with Suju Rajan

September 20, 2018

About this Episode

Episode 68 of Voices in AI features host Byron Reese and Suju Rajan discussing the differences between machine and human learning, as well as where machines could take us in advertising, privacy, and medicine. Suju holds a PhD in Machine Learning from the University of Texas and is currently the head of research at Criteo.

Visit www.VoicesinAI.com to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today I’m so excited: our guest is Suju Rajan. She is the head of research over at Criteo, and she holds a PhD in Machine Learning from the University of Texas. Welcome to the show.

Suju Rajan: Great to be here, Byron.

You know, we’re based in Austin, so I drive by your alma mater almost every day, so it’s kind of like a hometown interview.

That’s pretty cool. Go Longhorns!

We’re recording this in August and you picked a good time not to be here, you know?

I can imagine. I think when I graduated they actually were at the Rose Bowl (not that they actually won it), so I’m happy I was there at the right time.

There you go. So I always like to start with the simple question: what is artificial intelligence, or if you prefer, what is intelligence?

Let’s go with artificial intelligence, because I don’t think I’m quite qualified to answer what intelligence is overall. Let’s say the classical definition of artificial intelligence, the more ‘textbook’ one, right? This is where the whole field started off a few decades ago, in fact, where the goal was to create intelligence in machines that was comparable to human-level intelligence. And what does that mean? What do we think when we say someone is intelligent, right? It is the ability for us to reason, to extrapolate to situations that we haven’t been in before, and to come out of it relatively unscathed in some sense. So the ability to reason, to make sort of facile decisions, to solve a longer-term problem than just the task at hand, and to gather the relevant information to do this, is what I think is the standard of artificial intelligence.

So that’s a really high bar, because a simple definition is: ‘systems that respond to their environments.’

So let me take it a step down from, to your point, my high bar. Today the way artificial intelligence is being used overall in media, and maybe even in some portions of the community, is the ability to perform really well at certain specific tasks, at a level that is comparable to what a human would do. Now nobody questions, ‘is it really human-like?’ Because it’s within a really constrained environment, within the space of the data the thing has been trained on. If you look at some of the tests that they’ve done, it’s in a very narrow domain. Now ‘do we all agree that that is artificial intelligence?’ becomes an interesting debate, but I want to say that the mainstream has focused a lot more on intelligence in very narrow, specific tasks, and I wouldn’t call that artificial intelligence.

All right, so your particular area of study is a technique used in artificial intelligence, called machine learning. And machine learning, simply put, is: you take a lot of data about the past and you study it and you make predictions about the future, is that a fair oversimplification?

A fair oversimplification, yes.
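[To make that oversimplification concrete, here is a minimal sketch of the “study the past, predict the future” loop, assuming Python with scikit-learn and entirely synthetic, illustrative data:]

```python
# A minimal sketch of machine learning as "learn from the past,
# predict the future": fit a classifier on labeled historical
# examples, then apply it to new, unseen data. The data here is
# synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "The past": 200 labeled examples with two features each.
X_past = rng.normal(size=(200, 2))
y_past = (X_past[:, 0] + X_past[:, 1] > 0).astype(int)  # known outcomes

model = LogisticRegression()
model.fit(X_past, y_past)          # study the past

# "The future": new observations the model has never seen.
X_future = rng.normal(size=(5, 2))
print(model.predict(X_future))     # predictions about the future
```

[The implicit bet, as the next exchange explores, is that the future data is drawn from the same distribution as the past data the model was trained on.]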

And so the philosophic implication is that the future behaves like the past, and in a lot of cases that’s true: what a cat looks like tomorrow is probably what a cat looked like yesterday. But what a cellphone looks like tomorrow is not what a cellphone looked like 10 years ago, right? And how chess is played tomorrow is the same as it’s been played for 400 years, so that’s a really good application of it. What are some good applications of AI, and what are some things that aren’t so good?

Okay, great question here again. I think you’ve sort of nailed the whole answer. So imagine that your goal is somewhat fixed, right? And we as humans know what that goal needs to be. So if you could figure out that all you needed the system to do was to recognize cats in a picture (and this is a very, very well-defined problem), maybe we mess up how we train the model, or we are not careful about how it can be adapted and so forth, but within the scope of these sorts of problems, where the goal is really well defined, these systems do well.

Chess, for all its beauty, is still a constrained problem, right? There is a fixed space that you can explore, and maybe I’m over-trivializing this, but in some sense it’s a constrained problem. It’s here that we have made lots of good progress, and at least the algorithms that we are inventing are enabling us to make lots of progress in that sphere.
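[As an illustration of what “a fixed space that you can explore” means in practice, here is a toy sketch, again assuming Python: an exhaustive minimax search over tic-tac-toe, a game small enough to enumerate completely. Chess has the same structure, just an astronomically larger, though still finite, space:]

```python
# A toy illustration of exploring a fixed space: exhaustive minimax
# over tic-tac-toe. The board is a 9-character string; 'X' maximizes,
# 'O' minimizes, and every reachable position is searched.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Best achievable score for 'X' from board b with optimal play."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0  # draw
    scores = [minimax(b[:m] + player + b[m+1:],
                      'O' if player == 'X' else 'X') for m in moves]
    return max(scores) if player == 'X' else min(scores)

# From an empty board, perfect play by both sides is a draw:
print(minimax(' ' * 9, 'X'))  # -> 0
```

[The key point: because the space is fixed and fully enumerable, the search is guaranteed to find the optimal answer. The open-ended, long-horizon problems discussed next have no such enumerable space.]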

Now what it is not good at is being able to do a longer-term task. So imagine, there was this interesting problem that someone was talking to me about… If you wanted to graduate from a school with a good GPA, or if you wanted to land a specific good job, what is the set of courses that you would have to take, how would you have to perform, and so on and so forth? But given the kind of data that we had to solve this particular problem through an AI system, it became so trivialized that the sorts of things that came out of it were almost laughable.

So for a long-term projection where the path is pretty fuzzy, where it really comes down to human experience, having to talk to bunches of people, and constantly learning and readjusting, and so on and so forth, for these sorts of longer-term goals in which the end state is not as clear, we have a long, long, long, long way to go.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Source: Voices in AI – Episode 68: A Conversation with Suju Rajan