Collaborative Robotic Surveillance

For a future where teams of autonomous robots, or agents, operate and collaborate in unstructured conditions, the world needs sophisticated sensing, localization, and mapping technologies. Missions such as intelligence gathering, surveillance, reconnaissance, security monitoring, and situational awareness all require rapid processing of spatio-temporal information. By recognizing the presence of streets, trails, or narrow passages between buildings, for example, agents can plan paths that safely support a variety of objectives. By interpreting the complex spatial layout and context of a scene, agents can detect suspicious actors and activities that might otherwise go unnoticed. Because any surveillance problem yields large amounts of redundant and task-irrelevant data, agents must extract scene features at a level of detail suited to the task, yet compact enough to process in real time.

Silvia Ferrari and Mark Campbell, Sibley School of Mechanical and Aerospace Engineering, together with Kilian Q. Weinberger, Computer Science, are meeting these needs by combining their complementary expertise with recent developments in computer vision, machine learning, and decentralized sensor planning and estimation. The unified framework they developed enables agents to access and rapidly process large collections of data and video, such as datasets available on the web, while also extracting mission-relevant information from local video frames. The framework can fuse video captured from different viewpoints and under changes in appearance, scale, illumination, and focus, from static cameras as well as mobile observers. By continually updating its world view with new, incoming information, the system has the potential to vastly improve surveillance, security, and reconnaissance.

Cornell Researchers

Silvia Ferrari, Mark Campbell, Kilian Q. Weinberger

Funding Received

$1.7 Million spanning 4 years

Sponsored by

United States Department of Defense, Office of Naval Research