Elizabeth Nelson; Provided

Better Hardware, Better Computer Security

by Jackie Swift

Modern life depends on computers, which are vulnerable to hacking. Malware warnings show up regularly in the news, and we routinely install security patches for our computer operating systems and software applications. But these types of fixes focus on software without addressing the vulnerabilities of the hardware that provides the foundation for the entire computer system. “For years, people largely thought that security was a software issue,” says G. Edward Suh, Electrical and Computer Engineering. “Most of the malicious attacks were done in the software layer, so computer experts saw security as a question of how to write more secure software.”

That all changed over the past 10 years or so, as software vulnerabilities became more difficult to exploit. By the time hackers turned their attention to other ways of compromising computer systems, including the hardware, Suh was already looking into how to stop them. “Hardware provides a foundation for the system to be secure,” he explains. “At the same time, if you have a bug in the hardware that can be exploited, that can often break any protection in the software layers because the software runs on top of the hardware.”

Addressing Microprocessor Vulnerabilities

Suh works at the intersection of computer architecture and computer security. He addresses the twin questions of how to build hardware that is fundamentally more secure and how to use hardware to ensure the security of the entire computer system. In collaboration with Andrew C. Myers, Computer Science, he has been developing a new microprocessor that addresses vulnerabilities recently discovered in virtually all current microprocessors. Attacks such as Meltdown and Spectre exploit these vulnerabilities to leak sensitive information through microarchitectural timing channels: the amount of time it takes a computer to carry out particular operations.

Common performance optimization techniques, designed to improve processing speed, open the door to these vulnerabilities. In one such technique, called speculative execution, the processor predicts what work will be needed in the future and runs it ahead of time. The prediction is usually correct, and acting on it early makes the processor faster. In another technique, out-of-order execution, the processor runs parts of the program in a sequence other than the program order. Meltdown and Spectre take advantage of these techniques to let a malicious party use software to access secret or sensitive data that should be protected.
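The core trick behind these attacks is that a speculatively executed load leaves a trace in the cache, and an attacker can recover a secret by timing which cache line became fast. The sketch below is a toy simulation of that idea only: the "cache" is a Python set, "timing" is set membership, and the victim, the probe array base, and the secret are all made-up illustrations, not real exploit code.

```python
# Toy simulation of a cache-timing side channel in the spirit of Spectre.
# Nothing here touches real hardware; the cache and timing are simulated.

CACHE = set()

def access(line):
    """Touch a cache line; afterwards it is 'fast' to access."""
    CACHE.add(line)

def is_cached(line):
    """Stand-in for timing an access: cached lines are the fast ones."""
    return line in CACHE

def victim(index, secret_memory, probe_base=0x1000):
    # Hypothetical speculative window: the bounds check will ultimately
    # fail, but the processor has already executed the dependent load,
    # leaving a cache footprint keyed by the secret byte.
    secret_byte = secret_memory[index]
    access(probe_base + secret_byte)

def attacker(probe_base=0x1000):
    # Probe all 256 candidate lines; the cached one reveals the byte.
    for b in range(256):
        if is_cached(probe_base + b):
            return b
    return None

secret = b"K"
victim(0, secret)
leaked = attacker()
print(chr(leaked))  # prints "K": recovered without ever reading it directly
```

The point of the simulation is that the attacker never reads the secret itself; it only observes which memory accesses are fast, which is exactly the kind of timing channel Suh's processor design is meant to close.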

“If you have a bug in the hardware [system] that can be exploited, that can often break any protection in the software layers because the software runs on top of the hardware.”

“Meltdown and Spectre are a violation of information flow policy,” Suh says. “To address it, we designed a processor where the interface between hardware and software has constraints on information flow. We also created a new language for designing hardware that allows the designer to check the timing channel. When you design hardware in this way, you have a mathematical guarantee that it doesn’t have any illegal information flow, such as potentially leaking secrets.”

Critical Security for Self-Driving Cars

Suh and Myers are now working on extending the design language to handle much more complex processors, such as those in fast servers, and to provide strong security for safety-critical systems such as drones and medical devices. As part of this work, they have joined with Mark Campbell, Mechanical and Aerospace Engineering, to develop hardware security for safety-critical systems in self-driving cars.

“We wanted to prevent remote attacks where someone could log into a car’s controller, take over, and control its driving capabilities,” Suh says. “So we separated out the complex infotainment and self-driving components and put the safety-critical components, like collision avoidance, into a trusted, hardware-protected, secure environment.”

The researchers were able to create simple robotic demonstrations of their hardware security in action, and they continue to work on refining it.

Building a High-Security AI Accelerator

In another project, Suh joined with Amit Lal and Zhiru Zhang, both Electrical and Computer Engineering, to build an artificial intelligence (AI) accelerator for collaborative machine-learning settings. Lal wants to create a machine-learning model for optimizing semiconductor chip building, which requires convincing the many companies involved in each step of the process to provide sensitive data for training the AI.

“Each company owns a different piece of the process,” Suh says. “The goal is to build a machine-learning model that predicts how the quality of your chip design will change if you tweak certain parameters. To do that, you need data from all the participants, and they are reluctant to share. The problem is, how do you guarantee that data provided to build the model will not be copied, that a company’s secrets will not be accessed by competitors and others involved in the model?”

Lal’s conundrum provided the catalyst for Suh and Zhang to try to create a custom hardware accelerator that is both high performance and high security. “Rather than having to trust the owner of the computer or the computer system itself, our accelerator will be the only piece you need to trust,” Suh explains. “It will have a protection mechanism, a unique secret key. Data will be sent to the machine-learning model in an encrypted form, and only the hardware can decrypt it. The software running the system or the owner of the system cannot extract the data or the model itself.”
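The trust model Suh describes can be sketched in a few lines: the device key never leaves the accelerator, so the host only ever handles ciphertext. The toy below uses a hash-derived XOR keystream as a placeholder for real authenticated encryption, and the device key, data, and "model" are all invented for illustration; none of this reflects the actual accelerator's cryptography.

```python
# Toy model of the accelerator trust boundary: decryption happens only
# inside the device, so the host and OS see only ciphertext.
# The XOR-keystream "cipher" is a simplification, not real crypto.

import hashlib

def keystream(key, n):
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

class Accelerator:
    def __init__(self):
        # Unique per-device secret; in the real design it never leaves
        # the hardware. Hypothetical value for the sketch.
        self._device_key = b"hypothetical-device-secret"

    def encrypt_for_device(self, plaintext):
        # Stand-in for provisioning data to this specific device.
        return xor(plaintext, keystream(self._device_key, len(plaintext)))

    def run_model(self, ciphertext):
        # Decryption occurs only inside the device boundary.
        data = xor(ciphertext, keystream(self._device_key, len(ciphertext)))
        return sum(data) % 256  # placeholder for ML inference on the data

device = Accelerator()
training_data = b"proprietary chip-yield measurements"
blob = device.encrypt_for_device(training_data)

assert blob != training_data     # the host only ever sees this ciphertext
result = device.run_model(blob)  # only the device can use the plaintext
```

The design choice being illustrated is that the accelerator, not the host software or its owner, is the sole trusted component: even a compromised operating system cannot extract the plaintext data or the model.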

The researchers have succeeded in creating an initial version of a secure, deep-learning accelerator for convolutional neural networks. They are now looking into how to extend that design to a collaborative setting, as well as how to deal with the threat of adversarial examples: intentionally crafted inputs that a machine-learning model misclassifies, sabotaging the model. For example, if the model is used in a self-driving car, a malicious actor might subtly alter a stop sign so that the AI recognizes it as a speed-limit sign instead. “The implications are serious,” Suh says.
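The stop-sign scenario can be made concrete with a deliberately tiny example. The sketch below attacks a hand-rolled linear classifier with a fast-gradient-sign-style perturbation: nudge every feature a small amount in the direction that hurts the classifier most. The features, weights, and labels are all made up for illustration and have nothing to do with the researchers' actual models.

```python
# Toy adversarial example against a hand-rolled linear classifier.
# All numbers here are invented purely to demonstrate the failure mode.

def sign(x):
    return (x > 0) - (x < 0)

# A tiny linear "stop sign vs. speed-limit sign" classifier: score > 0 -> stop.
weights = [0.9, -0.5, 0.3, 0.8]
bias = -0.2

def classify(features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "stop" if score > 0 else "speed"

x = [0.6, 0.1, 0.4, 0.5]   # benign input: score 0.81, classified "stop"

# FGSM-style attack: push each feature against the sign of its weight,
# the direction that lowers the score fastest per unit of change.
eps = 0.5
x_adv = [f - eps * sign(w) for f, w in zip(x, weights)]

print(classify(x), "->", classify(x_adv))  # prints: stop -> speed
```

Even though each feature moved by at most 0.5, the small, coordinated changes are enough to flip the decision, which is why defending deployed models against such inputs is an open problem the team is studying.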

“When I got into hardware security originally, I thought it would become increasingly important,” he continues. “The potential impact of a compromised computer was growing fast, and today the area of hardware security is a major part of both computer architecture and computer security research. It was almost nonexistent when I started.”