While research in autonomous driving has made great strides in recent years, fully autonomous cars remain a distant goal, primarily because of a lack of robustness. Current autonomous cars can't drive on new roads or roads that have changed substantially, such as after an earthquake or hurricane. Autonomous cars also don't know what to do during a GPS or data outage, such as in parking garages, dense urban areas, and tunnels. Importantly, humans are good at all of this. They can drive without detailed maps or high-precision sensors, typically require only a small amount of information for guidance, and their performance generally improves over time through learning.
Using the intelligent human driver as a guide, Mark Campbell, Mechanical and Aerospace Engineering, and Kilian Q. Weinberger, Computer Science, are developing algorithms that can perceive and make predictions about a scene in real time with measurable confidence, particularly for the parts of the scene closest to the car. New robustness characteristics in Campbell and Weinberger's sights include the ability to detect and overcome mistakes, both in the near term (real time) and the long term (learning). Their goal is to design the algorithms in a way that enables inherent robustness, giving autonomous cars the ability to gauge and respond to dynamic scenarios and to learn from those scenarios as a human driver would. The team is also integrating the algorithms into Cornell's autonomous car software framework and validating the components and system in a series of experimental scenarios, enabling their faster adoption by the community.