From RoboBee to REMUS to machines yet to come, Silvia Ferrari is taking artificial intelligence for autonomous vehicles to new heights.
Dave Burbank


"I have always liked artificial intelligence. My philosophy is to merge machine learning and reinforcement learning techniques with traditional engineering."
Dave Burbank


A primary facet of the research in Ferrari's lab concerns perception, which gives a vehicle the ability to understand its environment—imperative for machines that locate buried explosives or pollinate crops or search for earthquake victims.
Dave Burbank


Ferrari's lab is perfecting RoboBee, a quarter-sized, insect-like flying machine for missions ranging from search and rescue to crop pollination, and REMUS, a submarine-like machine for hunting explosives in the ocean.
Beatrice Jin; Dave Burbank


"Right now we're talking to Boeing…We can use a lot of what we're doing to develop autonomous aircraft taxying, which is something Boeing's interested in."
Dave Burbank

Autonomous Vehicles for Air, Land, and Sea

by Jackie Swift

Decades ago we first envisioned machines run by artificial intelligence as helpers for humanity. They would carry out the tasks we didn’t want to do; they would go where we couldn’t; and they would be safe and reliable, able to react to the environment and cope logically with unforeseen circumstances. The future is now here, and autonomous machines with artificial intelligence are becoming a reality, thanks in part to Silvia Ferrari, Mechanical and Aerospace Engineering, and her lab, known as the Laboratory for Intelligent Systems and Controls (LISC).

Ferrari develops new theory and methods of learning, control, and computational intelligence for autonomous systems. “I have always liked artificial intelligence,” she says. “My philosophy is to merge machine learning and reinforcement learning techniques with traditional engineering. I am also really interested in pathfinding and computational geometry methods that stem from computer science. I like to keep the reliability and theoretical guarantees for the designs we develop but also try to introduce that ability humans have to cope with unforeseen circumstances.”

Machines with Perception and Control for Maneuvers with No Human Intervention

Ferrari and the LISC are working on a multitude of projects, many of them focusing on autonomous sensor technologies installed on ground, air, or water vehicles. Their core research question deals with control: How do you automate a process so that the system can operate without human intervention for a long period of time? “Much of my research focuses on an aspect of autonomous systems known as perception, which is any sensing-related process by which the system understands its environment,” Ferrari says. “Perception is necessary for safe navigation, but a lot of these vehicles and sensors are utilized not just for vehicle navigation but to carry out some other mission—for example, finding targets in search and rescue, or locating buried mines for landmine detection.”

In one project that sounds as if it’s straight out of science fiction, Ferrari is collaborating with Robert J. Wood from Harvard University on an insect-scale unmanned autonomous vehicle. Dubbed the RoboBee, the machine is the size of a quarter and fitted out with a host of very small sensors, including artificial retinas. Currently the RoboBee can carry out simple maneuvers such as hovering and perching, but the researchers envision it doing much more—for instance, flying through rubble after an earthquake to search for survivors or pollinating crops. To facilitate the RoboBee’s autonomous activities, the LISC is working on control and perception utilizing neuromorphic computer chips and sensors. These function like neurons, carrying information in the form of spike trains instead of digital signals. Neuromorphic components only pay attention when a signal spikes or exceeds a certain threshold. Information below the threshold is ignored, similar to how human brains and retinas work.
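The spike-threshold behavior described above can be sketched in a few lines. This is a minimal, illustrative leaky integrate-and-fire model, not the RoboBee's actual hardware or the lab's algorithms; all parameter values are assumptions chosen for the example.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which a spike is emitted.

    The neuron accumulates input with a leak; it only "pays attention"
    (fires) when the accumulated potential crosses the threshold, and
    sub-threshold signals are simply ignored.
    """
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:              # fire only on crossing
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

# A weak, steady signal never fires; a brief burst fires once.
print(integrate_and_fire([0.05] * 10))           # []
print(integrate_and_fire([0.3, 0.4, 0.5, 0.1]))  # [2]
```

Because nothing is computed or transmitted between spikes, this event-driven style is what lets neuromorphic components run on far less power than conventional digital pipelines.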

“Biological brains and eyes use a lot less power than machines do,” Ferrari explains. “By using these spike mechanisms, you can greatly reduce power consumption for machines. People have fabricated some of these devices, but no one has used them for control yet. We are one of the groups working on this application. The RoboBee has to have an onboard controller to fly autonomously, and it has to also include perception because it has to sense its environment both for control and navigation. We design the theory and algorithms that allow these neuromorphic components not only to process information but also to make decisions that control the bee autonomously.”

“People have fabricated some of these devices, but no one has used them for control yet. We are one of the groups working on this application. The RoboBee has to have an onboard controller to fly autonomously.”

Measuring Air Pollution

In another project for the Environmental Protection Agency (EPA), Ferrari collaborates with Cornell colleague John D. Albertson, Civil and Environmental Engineering, to design the path planning and control of autonomous sensors for the EPA’s Geospatial Measurement of Air Pollution—Remote Emissions project. The sensors are installed on special EPA trucks that are used by the agency to monitor and localize emissions of dangerous gases such as methane. The batteries that power the sensors only last a matter of hours, so the EPA is concerned with finding the fugitive methane leaks as efficiently as possible.

“Because all these systems are expensive and require energy, often the first approach is to drive them around randomly and hope to get as much data as possible,” Ferrari says. “Our role is to develop path planning and algorithms that essentially tell the vehicle where to go to get the best measurements.” Instead of driving aimlessly, when the vehicle finds a spike in gas emissions, it uses wind direction to help formulate a theory of the origin of the leak and to plan a path to the next best place to obtain more measurements, so it can localize the leak with higher certainty, Ferrari explains. Right now the EPA trucks are driven by human drivers, but one day they may be fully automated.
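The upwind reasoning described above can be sketched as follows. This is a hedged illustration of the general idea, not the lab's path-planning algorithm: the straight-upwind rule, the step size, and the function name are all assumptions made for the example.

```python
import math

def next_waypoint(spike_pos, wind_dir_deg, step=50.0):
    """Pick the next measurement point after a gas-concentration spike.

    If the wind blows toward `wind_dir_deg` (degrees, 0 = east), the
    hypothesized source lies upwind, so step `step` meters against the
    wind vector from where the spike was measured.
    """
    theta = math.radians(wind_dir_deg)
    upwind = (-math.cos(theta), -math.sin(theta))
    return (spike_pos[0] + step * upwind[0],
            spike_pos[1] + step * upwind[1])

# Spike measured at (100, 200) m with wind blowing due east:
# the next waypoint lies 50 m to the west, toward the suspected leak.
print(next_waypoint((100.0, 200.0), 0.0))  # (50.0, 200.0)
```

Repeating this measure-and-move loop narrows the search region far faster than random driving, which is the efficiency gain the passage describes.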

Looking for Explosive Devices in the Ocean

Automation is a priority for the armed services, and Ferrari works with the United States Navy on multiple projects connected to Autonomous Underwater Vehicles (AUVs). One project centers on an AUV known as a Remote Environmental Monitoring Unit System (REMUS). Resembling a small submarine, REMUS has communications and sonar onboard and is fully autonomous. It is designed to look for explosive devices in the ocean or other large bodies of water. Once REMUS finds a mine, it identifies the mine’s precise location so it can be neutralized. “We are developing pathfinding algorithms for REMUS that take into account ocean currents,” Ferrari says. “To make a mission last as long as possible, we exploit knowledge of the currents. By using that information, we can plan the path of the vehicle so it uses the least energy possible while also finding the most effective route for localizing the mines. It’s analogous to the EPA problem, except with sonar.”
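A toy calculation shows why exploiting knowledge of the currents saves energy: to hold a desired velocity over the ground, the vehicle's thrust must make up the difference between that velocity and the water current, and drag power grows steeply with speed through the water. The cubic drag model and all numbers below are illustrative assumptions, not the REMUS planner.

```python
import math

def leg_energy(start, end, ground_speed, current):
    """Approximate energy to traverse one leg at a fixed speed over ground."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist                  # unit heading over ground
    vx, vy = ground_speed * ux, ground_speed * uy  # velocity over ground
    # Velocity relative to the water = ground velocity minus current.
    wx, wy = vx - current[0], vy - current[1]
    water_speed = math.hypot(wx, wy)
    time = dist / ground_speed
    return water_speed ** 3 * time                 # drag power ~ speed^3

# The same 1000 m leg, riding a 0.5 m/s favorable current vs. fighting it:
with_current    = leg_energy((0, 0), (1000, 0), 1.5, (0.5, 0.0))
against_current = leg_energy((0, 0), (1000, 0), 1.5, (-0.5, 0.0))
print(with_current < against_current)  # True
```

A planner that sums such leg costs over candidate routes will naturally prefer paths that ride favorable currents, stretching the mission's battery life while still covering the search area.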

Ferrari and her collaborator, Tom A. Wettergren, senior scientist with the United States Navy, are now writing a book about the control methods they’ve developed for REMUS and other autonomous vehicles. The book, Information-Driven Planning and Control (CRC Press, 2018), covers the body of theory and algorithms. “It’s a unifying theory for all types of sensors so we talk about many different types and applications,” Ferrari says.

Ferrari is constantly fielding calls from governmental agencies and corporations interested in her work. “Right now we’re talking to Boeing,” she says. “I recently realized we can use a lot of what we’re doing to develop autonomous aircraft taxying, which is something Boeing’s interested in. I have so many projects, there’s always another new one.”