This week in our tech history course, we’re digging deeper into one of the hottest topics in tech today: machine learning. How has this subset of artificial intelligence captured hearts, minds, and imaginations across the world? Generally, by failing in adorable ways.
Time to settle in and crack open those textbooks to chapter 8; tech history class is in session!
What is ML exactly?
So, before we get started, it’s important to get our terms right. There are lots of overlapping terms and phrases that can occasionally (but not always) be used interchangeably. AI, as we discussed last week, is the broad concept of machines simulating intelligence.
Machine learning (or ML for short) is a subfield within AI. It uses statistical techniques to give computer systems the ability to “learn” (i.e., progressively improve performance on a specific task) from data, without being explicitly programmed.
Basically, by feeding lots of information to a neural net (a computing framework loosely modeled on a biological neural system), the net develops its own rules for making sense of that information. With some helpful parameters and training, it builds a framework of its own to handle things like image recognition.
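To make “learning from data without being explicitly programmed” concrete, here is a minimal toy sketch: a single artificial neuron (a perceptron) that picks up the logical OR rule purely from labeled examples. The function names and numbers are illustrative inventions, not any real framework’s API.

```python
# A minimal sketch of "learning from data": one artificial neuron
# adjusts its weights from labeled examples instead of being handed
# explicit if/else rules. Toy code, not a real ML framework.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights for a 2-input neuron from (input, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            # Step activation: fire (1) if the weighted sum crosses 0
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical OR purely from examples
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2) in samples]
print(predictions)  # → [0, 1, 1, 1]: the learned rule reproduces OR
```

Nobody wrote the OR rule into the code; the weights that encode it emerged from the training loop. Real neural nets do the same thing with millions of weights instead of three.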
Obviously, this is a vast oversimplification, but it represents a dramatic shift in how we approach artificial intelligence. Instead of “can computers think?”, we’re asking “can computers do the kinds of things that we can do?”. In some cases, yes. In others, well… no.
I spy with my little eye
Machine learning is really useful for very specific tasks. We can’t build the next Jarvis yet, but we can use ML to find dark matter in deep space. In particular, ML excels at things like image recognition, natural language processing, search engine ranking, sentiment analysis, speech and handwriting analysis, and machine translation. That isn’t to say there are no missteps, but these are its strong suits.
Why? Well, tasks like image recognition rely on vast datasets of images that are already pre-tagged. With a large enough dataset, your budding neural net should be able to decide whether that fluffy white thing is a sheep or a cloud.
One of the most popular machine learning frameworks out there is TensorFlow. Developed by Google, TensorFlow is an open source library for numerical computation and large-scale machine learning. It’s used to train and run neural networks for a wide range of tasks from image recognition to machine translation and natural language processing.
TensorFlow works with dataflow graphs, structures that describe how data moves through a series of processing nodes. Every node in the graph represents a mathematical operation; every connection between nodes is a tensor, a multidimensional data array. Although you program TensorFlow in Python, the actual math is executed by high-performance C++ binaries behind the scenes.
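The dataflow-graph idea can be sketched in a few lines of plain Python. To be clear, this is not the TensorFlow API; the `Node` class and `const` helper below are hypothetical stand-ins that just illustrate the build-then-execute pattern.

```python
# A toy sketch of the dataflow-graph idea behind TensorFlow, in plain
# Python (NOT the TensorFlow API). Each node is an operation; each
# edge carries the value ("tensor") flowing between nodes.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the mathematical operation at this node
        self.inputs = inputs  # upstream nodes whose outputs feed in

    def run(self):
        # Evaluate upstream nodes first, then apply this node's op
        return self.op(*(n.run() for n in self.inputs))

def const(value):
    """A leaf node that simply emits a fixed value."""
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4 -- nothing is computed yet
a, b, c = const(2.0), const(3.0), const(4.0)
total = Node(lambda x, y: x + y, a, b)
scaled = Node(lambda x, y: x * y, total, c)

# Execution happens only when we ask for a result
print(scaled.run())  # → 20.0
```

Separating graph construction from execution is what lets a framework like TensorFlow hand the whole computation to optimized C++ (or GPU) backends instead of interpreting it step by step in Python.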
Of course, there are lots of machine learning tools out there for developers to choose from, like PyTorch, CNTK, and Deeplearning4j. While each of these tools has its own advantages and disadvantages, there is an underlying issue with machine learning that requires a closer look.
Garbage in, garbage out
This leap towards near-human performance hasn’t been fueled by a breakthrough in cognition. Instead, we’ve discovered that, given lots of data and a few helpful parameters, a neural net can approximate the right answer. But we really don’t understand how it does this!
Instead of following a linear flow of logic, machine learning tools like neural nets are modeled after our own brains, a digital Gordian knot. Since they can perform at near-human levels in some domains like image recognition, there’s a temptation to believe they work the same way our brains do.
This is very, very wrong.
“We don’t really understand what the system learned. In fact, that’s its power. This is less like giving instructions to a computer; it’s more like training a puppy-machine-creature we don’t really understand or control,” Zeynep Tufekci explained in her TED talk.
“Our machine intelligence can fail in ways that don’t fit error patterns of humans, in ways we won’t expect and be prepared for.”
Because these systems follow their own logic, they can be tricked in ways a human brain wouldn’t be. Take image recognition: Maciej Cegłowski pointed out that you can break an image classifier just by adding a layer of static to an image. A human wouldn’t be fooled, but the classifier suddenly sees a school bus as an ostrich.
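A toy example shows how a carefully aimed perturbation can flip a classifier’s decision. The weights, features, and labels below are made-up numbers standing in for a real image model; the core trick, nudging each input against the sign of its weight, is the same idea that gradient-based attacks exploit at scale.

```python
# Toy illustration of an adversarial example: a targeted perturbation
# flips a linear classifier's decision even though the input changes
# only modestly. Invented numbers, not a real image model.

def classify(features, weights, bias=0.0):
    """Linear classifier: 'bus' if the score is positive, else 'ostrich'."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "bus" if score > 0 else "ostrich"

weights = [0.5, -1.2, 0.8, 0.3]   # what the model "learned"
image = [1.0, 0.2, 0.9, 0.5]      # stands in for pixel features

# Push each feature a little in the direction that hurts the score most,
# i.e. against the sign of its weight -- the essence of gradient attacks
eps = 0.6
noise = [-eps if w > 0 else eps for w in weights]
attacked = [f + n for f, n in zip(image, noise)]

print(classify(image, weights))     # → bus
print(classify(attacked, weights))  # → ostrich
```

The perturbation looks like meaningless static to us, but because it is aligned with the model’s internal weights, it pushes the score straight across the decision boundary.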
In other ways, a machine is only as objective or moral as the humans who code it. There’s no ethics requirement in computer science, although the self-driving teams may bulldoze their way through the trolley problem soon enough. Datasets can be racist, sexist, homophobic, or anti-Semitic. But because the answer comes from a machine, we can tell ourselves it’s “objective, neutral computation” even when it’s validating humanity’s worst impulses.
The future of machine learning
Can machine learning move past this kind of barrier? There are a lot of smart people working on this problem.
Just this week, MIT announced a $1 billion commitment to build an entire new college dedicated to artificial intelligence and machine learning. Beyond the computing aspects, a major goal is teaching students to responsibly use and develop AI and computing technologies.
“As computing reshapes our world, MIT intends to help make sure it does so for the good of all,” said MIT President L. Rafael Reif.
This is only a first step, but hopefully we as a community can move toward a more open, less alien approach to machine learning. Otherwise, the monster we are building might come back to haunt us in the future.
Miss a week of class? We’ve got your make-up work right here. Check out other chapters in our Know Your History series!
The post Know your history — Machine learning makes some computers smarter-ish appeared first on JAXenter.