Elizabeth Nelson

What Makes a Machine Intelligent?

by Jackie Swift

Artificial intelligence (AI) is a mainstay of science fiction. From the rogue computer HAL in 2001: A Space Odyssey, to Iron Man Tony Stark’s stalwart assistant, JARVIS, the potential and the peril of AI have been imagined and explored for decades. But how close are we to these fictional portrayals?

“In some sense, the field of computer science has, for a long time, been really far behind these grand expectations for artificial intelligence that you see in movies,” says Kilian Q. Weinberger, Computer Science. “Only lately, for better or worse, have we caught up with some of this stuff that’s been promised in science fiction.”

Weinberger is an expert on machine learning, the algorithm-driven processes that enable AI to analyze and draw inferences from patterns in data. “Machine learning tries to answer the question of how to make computers learn from experience,” he says. “Instead of writing a concrete software program of how exactly to do something, which is the traditional way to program computers, I write a program that can learn, and then I show it examples of what I want it to do. It’s a very different approach.”

AI All Around Us

Machine learning is used in many day-to-day applications we take for granted—from the phone cameras we rely on to identify and optimize human faces in the photos we take, to spam filters that capture objectionable email before it hits our inboxes.

“In the case of something like face recognition, it’s very hard to write a program using the traditional approach where I would have to say exactly what a face looks like,” says Weinberger. “I could say, ‘It has two eyes and a nose and a mouth,’ but what if someone has only one eye? Or what if they have an eye patch or glasses or they’re wearing a hat? And then, of course, there is the question of what an eye or a nose looks like.”

Rather than explicitly describing the attributes of a human face, Weinberger would write a learning algorithm that detects patterns. He would then show it examples: 10,000 images with faces in them clearly identified, followed by 100,000 images in which no faces appear, he explains. “The algorithm would try to find patterns that are present on faces and not present on not-face images,” he says. “By seeing more and more examples, it would get better and better at identifying faces.”
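The learn-from-examples approach can be sketched with a toy classifier. The code below trains a simple perceptron, one of the oldest learning algorithms, on invented two-dimensional "feature vectors" standing in for face and non-face images; the features, data, and numbers are purely illustrative and are not from Weinberger's actual systems, which use far larger neural networks.

```python
# Toy "learning from examples": a perceptron that separates hypothetical
# "face" feature vectors from "non-face" ones. All data is invented.

def train_perceptron(examples, labels, epochs=20):
    """Learn weights w and bias b from labeled examples."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):  # y is +1 (face) or -1 (not face)
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:              # mistake: nudge the boundary
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    """Classify a new example: +1 for 'face', -1 for 'not face'."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical 2-D features (say, "symmetry" and "texture" scores)
faces     = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7)]
non_faces = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.3)]
X = faces + non_faces
y = [1, 1, 1, -1, -1, -1]

w, b = train_perceptron(X, y)
print(predict(w, b, (0.85, 0.75)))  # → 1 (classified as "face")
```

The key point matches Weinberger's description: no line of this program says what a face looks like. The decision boundary emerges entirely from the labeled examples, and more examples would sharpen it further.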

Perception for Self-Driving Cars

Weinberger has applied his interest in machine learning to the question of perception in autonomous vehicles. He has joined with Mark E. Campbell, Mechanical and Aerospace Engineering; Bharath Hariharan, Computer Science; and Wei-Lun (Harry) Chao, Ohio State University, who previously worked with Weinberger and Campbell at Cornell as a postdoctoral associate. Weinberger and his collaborators are designing a series of algorithms—known as neural network architectures—that can detect objects in three dimensions for self-driving cars.

“Instead of writing a concrete software program of how exactly to do something ... I write a program that can learn, and then I show it examples of what I want it to do.”

“A self-driving car has to detect other cars, pedestrians, cyclists, stop signs: all kinds of things,” Weinberger says. “Not only that, but it has to localize them; it has to know where they are so it can plan to drive around them. There are all kinds of design constraints. The perception has to be very fast. It’s not okay if the car runs a pedestrian over and afterward it realizes, ‘That was definitely a pedestrian.’ It has to know beforehand, and it can’t be wrong.”

Currently, self-driving cars use Light Detection and Ranging (LiDAR) sensors to measure distance to objects. A laser, mounted on top of the car, pulses light in a circle and measures how far away objects are by how long it takes for the beam to reflect off them, generating a point cloud of surrounding objects. LiDAR is very safe, but also extremely expensive. “It costs upwards of $10,000 per car,” Weinberger says.
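The range measurement itself is simple time-of-flight geometry: the pulse travels out and back, so the distance is the speed of light times the round-trip time, divided by two. A minimal sketch (the example timing value is illustrative):

```python
# Time-of-flight ranging: a laser pulse travels to the object and back,
# so range = (speed of light * round-trip time) / 2.

C = 299_792_458.0  # speed of light in m/s

def lidar_range_m(round_trip_s: float) -> float:
    """Distance to a reflecting object, given the pulse's round-trip time."""
    return C * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds came from ~30 m away.
print(round(lidar_range_m(200e-9), 1))  # → 30.0
```

Sweeping such measurements in a circle, many times per second, is what builds up the 3-D point cloud the car's perception algorithms consume.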

Using a car Campbell had outfitted with two cameras positioned a meter apart that simulate the stereo vision of human eyes, Weinberger and his collaborators tackled the problem of perception. The researchers were able to develop a pseudo-LiDAR that uses data from the cameras to generate a point cloud similar to that of a LiDAR at a fraction of the cost. “If an object is close, it’s not in the same location in the two cameras,” Weinberger explains. “From that offset, you can compute how far away the object is. We can tell where a pedestrian is, for example, and put a box around them and say to the algorithm, ‘Inside the box is the pedestrian, so drive around the box.’”
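The offset Weinberger describes is the standard stereo triangulation relationship: depth is the focal length times the camera baseline, divided by the pixel disparity between the two views. A sketch of that computation, with an illustrative focal length (the 1-meter baseline is from the article, but nothing else here is from the team's actual pipeline):

```python
# Stereo depth from disparity: a point at pixel x_left in one camera and
# x_right in the other has disparity d = x_left - x_right (in pixels).
# Standard triangulation: depth = focal_length * baseline / disparity.

def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point given its disparity between the two cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or matching failed")
    return focal_px * baseline_m / disparity_px

# Hypothetical 1400-pixel focal length, 1 m baseline (as in the article),
# 70-pixel disparity → the point is 20 m away.
print(stereo_depth_m(1400.0, 1.0, 70.0))  # → 20.0
```

Applying this to every matched pixel pair yields a point cloud much like a LiDAR's, which is the essence of the pseudo-LiDAR idea: nearby objects shift a lot between the two views, distant ones barely at all.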

While the idea of mounting two cameras for stereo vision has been around for a while, no one knew how to design neural networks that could use the data generated by the cameras to detect objects. In a series of papers—starting with a widely reported publication in 2018—Weinberger and his collaborators showed how it could be done. “There’s a fair amount of problems to getting a self-driving car on the road, and the research from our paper may have removed one of the roadblocks,” Weinberger says.

Countering AI Overconfidence

Turning his attention to another AI problem, Weinberger is seeking a way to counter the inherent overconfidence of neural networks. “They learn the example data given to them until they make no more mistakes, and once that happens they are convinced they are always right,” he says.

This becomes problematic for decision making. For instance, if a surgeon is relying on an AI to diagnose a tumor and the AI says it is cancer, the surgeon needs to know how certain the diagnosis is. “If the algorithm tells you it’s 55 percent sure that the tumor is cancer, maybe you don’t want to do the surgery yet,” Weinberger says. “But if the algorithm is 99.9 percent sure, then maybe go ahead and do it immediately.”

To address this issue, Weinberger and his collaborators are attempting to create algorithms that produce well-calibrated output probabilities so that the AI is able to recognize and quantify its own degree of accuracy. “We’re working on fundamental approaches that people can use for any application,” Weinberger says. The possibilities range from medical applications to self-driving cars to facial recognition and beyond.
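One simple recalibration recipe from this research area (not named in the article) is temperature scaling: divide the network's raw scores by a single learned constant before converting them to probabilities, which softens overconfident outputs without changing which answer the network picks. A sketch with made-up scores:

```python
import math

# Temperature scaling: divide raw network scores (logits) by a learned
# constant T before the softmax. T > 1 softens overconfident outputs
# without changing the predicted class. The logits below are invented.

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities, optionally tempered by T."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.5]                  # raw scores for 3 classes

overconfident = softmax(logits)           # T = 1:   ~93% on class 0
calibrated = softmax(logits, 2.5)         # T = 2.5: ~65% on class 0
print(round(overconfident[0], 2), round(calibrated[0], 2))  # → 0.93 0.65
```

If the tempered probabilities match observed accuracy (the model is right about 65 percent of the time when it says 65 percent), a surgeon in Weinberger's example can treat the number as an honest confidence rather than reflexive certainty.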

Fulfilling a Dream

Weinberger’s fascination with AI goes all the way back to his childhood experiences, when popular culture showed him a thrilling array of possibilities dependent on artificial intelligence. “Who doesn’t dream of building an AI?” he asks. “It’s really cool. I think my interest in artificial intelligence is as simple as that.”