Fred Schneider tells us that although cybersecurity is a technical problem, expecting the best solution to be solely technical is not reasonable.
Beatrice Jin/Frank DiMeo

“We don’t avoid crime in the United States by using only technical means, and we shouldn’t expect to address all of our cybersecurity problems by employing technical means alone. An analogy with bank robberies makes the point.”
Beatrice Jin

“I’m not talking about solving the cybersecurity problem. I’m talking just about improving things. That’s the best we should expect...We’re never going to solve all the health problems...Cybersecurity is going to be the same way.”
Beatrice Jin/Frank DiMeo

Schneider says every time we deploy systems in a new setting, we create new opportunities for attacks, and we need new defenses. “We’re now talking about controlling the electric power grid…using networked computers…once we used networked computers only for file sharing and email.”
Beatrice Jin/Frank DiMeo

Cybercriminals succeed because a system was built with a flaw in it. They get in because there was a crack—a flaw in the implementation or in the design. It’s a lock on a door, but somebody forgot there was a window, says Schneider.
Frank DiMeo

A Unique Eye on Cybersecurity

by Tina Snead

Fred B. Schneider, Computer Science, has had his eye on cybersecurity for more than three decades. His ideas about how to keep the cyberworld secure, however, are not what you might expect. In a conversation with Cornell Research, Schneider shares his views on the solution and how he got there.

This conversation is the first in a three-part series in which we explore Cornell’s approach to a science of cybersecurity.

Cornell Research: Cybersecurity is big. It affects just about all of us, around the globe. What’s your take on it?

Schneider: I’m not worried about defending against today’s attacks. That’s a game we’re guaranteed to lose, because an attack has to succeed before anybody pays attention to building a suitable defense. So we’re always playing catch-up with the attacker. Instead, I’ve been trying to identify a science that underlies computer security. For example, physics is the science that underlies mechanical engineering. Engineers apply physical laws when solving particular problems. The laws inform what engineers can do, what they can’t do, and how much it’s going to cost. But we don’t have those laws for cybersecurity. All we have is evidence of particular vulnerabilities—or holes—that hackers exploit, and once the vulnerability is exposed, we talk about how we might patch it.

Will you explain your approach to cybersecurity?

I focus on fundamental questions, such as: What class of attacks will a given class of defenses repel? What class of attacks can we never defend against, no matter what new technology is developed? What’s the intrinsic cost of defending against certain kinds of attacks or certain kinds of attackers? What classes of policies can we enforce by using a given class of defense mechanism?

Our work at Cornell is distinguished by its search for principles and abstractions. We want to cover the whole landscape, and we strive to draw connections between abstractions, so one can think about things at a higher level.

I take an unusually broad view. We don’t avoid crime in the United States by using only technical means, and we shouldn’t expect to address all of our cybersecurity problems by employing technical means alone. An analogy with bank robberies makes the point. We put burglar alarms and surveillance cameras in banks. That technology is not enough to prevent robberies—just read the newspapers. But it’s rare for a bank robber to succeed in the end, because of the nontechnical means we also put in place: police catch the robber, aided by the surveillance tapes, and courts convict the robber and impose penalties.

Cybersecurity won’t have a purely technical solution either. Policy and regulation will be essential elements. With that observation in mind, I’ve been investigating legal and regulatory means, working jointly with Deirdre Mulligan at University of California, Berkeley. We’ve been developing a doctrine to serve as the basis for regulatory solutions to cybersecurity problems. One thing that’s clear: legislation must incentivize individuals, institutions, and governments to make larger investments in cybersecurity. Additional investments are the only way we might change the playing field in favor of the defenders.

This seems like an unusual tactic for cybersecurity.

Yes, it’s tempting to believe that problems caused by technology must be solved using technology. Many researchers are looking at specific ways of breaking into systems so they can then contemplate protection against those attacks. Not me. Instead, I’m looking at a bigger picture: how to get individuals to do the right thing through regulatory and economic incentives, and what’s the scientific basis for technical solutions that we deploy, so we can predict where they will be effective and where they won’t.

Notice, I’m not talking about solving the cybersecurity problem. I’m talking just about improving things. That’s the best we should expect. An analogy with medical science is instructive. We’re never going to solve all the health problems. We may cure one or another disease, but some other disease becomes our concern. We’re always going to be worried about making our health better. Cybersecurity is going to be the same way.

“Cybersecurity won’t have a purely technical solution. . . . Policy and regulation will be essential elements. With that observation in mind, I’ve been investigating legal and regulatory means, jointly with Deirdre Mulligan at University of California, Berkeley.”

We are using computing systems for new things every day. New uses require new functionality, and that functionality invariably admits new attacks. For instance, we’re now talking about controlling the electric power grid by using networked computers, where once we used networked computers only for file sharing and email. Delaying the delivery of an email by a few seconds is not very dangerous, so that timeliness was not worth defending; but delaying the delivery of a control signal in the power grid could cause a blackout—a defense of timeliness is now necessary.

Our needs are a moving target, and solving yesterday’s cybersecurity problem isn’t good enough for tomorrow’s systems. The only hope we have, instead of being reactive, is to have a science base that informs how today’s deployments will continue to work in plausible futures.

What do we have to fear most in cybersecurity? 

Let’s talk about the threat. There are high-end attackers who are well financed, and there are those at the low end, who are not. The United States buys and depends on military weapons systems that employ networked computers and satellites for communications. If some nation makes large investments to compromise those computing systems, then our high-tech national security apparatus is no longer useful. A worst case would be to make our systems misbehave just enough for soldiers to lose faith in correct operation. Moreover, once we think that our defenses might not work, our nation’s leaders would have to be less bold and more inclined to accept compromises. So the high-end threat is quite worrisome.

And the low-end threat?

An example of the low-end threat is where somebody steals your identity—starts acting as you—and makes purchases using a bank or a credit card. You’ll get the money back, but it will take many months for you to undo damage to your reputation. Also, your credit rating will be destroyed, and rebuilding that will be hard. In short, your world is turned upside down. Inconvenience and pain. New regulations could go a long way in addressing these problems.

Another aspect worth mentioning is privacy. What should it mean, now that data are ubiquitous and widely collected? Are we even entitled to privacy anymore? In the 1700s, when we all lived in small towns and there was a town square, not much was secret from village residents. You had no privacy. So privacy, as we view it today, is a recent phenomenon. We, as a society, need to come to terms with an appropriate notion of privacy for our networked age, just as we had to revisit our views about privacy when telephones were first deployed, because wiretapping became possible, and when photography was introduced, because anyone could record and publicize our actions.

How does this recent phenomenon, privacy, affect security?

Surveillance contributes to security, because it enables deterrence through accountability and because it allows us to deploy defenses only in the places that we have learned will be attacked. Government wiretapping has been driven by this desire for implementing security. Privacy here is the antithesis of surveillance. But there is a spectrum of choices; our nation needs to start talking about them and the trade-offs, so we can establish norms and regulations.

Cornell computer science research has become quite security focused and distinguished for it. What has been the impetus for this distinction?

In the 1980s, Cornell’s CS [Computer Science] department studied distributed systems. Researchers at other universities focused on performance questions for these systems. But it was clear to us that an important piece of the picture was being ignored—providing service, despite failures of the individual processors and communications links. A distributed system that was unusable because one piece failed was clearly not an acceptable proposition, no matter how fast it might run when operational.

So, my colleagues and I focused on how to design distributed systems that would be fault-tolerant. Instead of being driven by a quantitative dimension such as performance, we turned our attention to a qualitative one—tolerating failures. And we made some groundbreaking contributions, which made Cornell an exciting place to be a faculty member or a graduate student. The Cornell group was—and still is—seen as the best distributed systems group in the world, in part because addressing fault-tolerance is now widely seen as a central problem.

And from there?

Fault-tolerance is only one dimension of building trustworthy systems—systems that do what is expected and don’t do anything that isn’t, for example, in response to failures and attacks. The Cornell CS department decided to broaden our faculty to include researchers interested in computer security. We interviewed a number of people but didn’t find the right fit. At the time, I was helping the National Academies on a study about system trustworthiness; we ultimately produced the book Trust in Cyberspace. I was the fault-tolerance guy; the rest of the team included some of the best security guys in the country. One morning, when contemplating our department’s recruiting effort, I asked myself: “Well, why don’t I start looking at cybersecurity problems?”

I had done computer security research as a graduate student but had then drifted away into other areas of study. I realized what a great time this would be to switch into computer security research, because I had regular access to the best people in the country. I used the National Academies study as an opportunity to apprentice myself, and I came up to speed on what was then the leading edge in security.

A few years later, computer security became a hot research area in CS. As other faculty began to move into the area, Cornell hired more. At Cornell CS, our culture has been to explore solutions first, rather than to first build artifacts as a means of discovering problems. This principled view has distinguished Cornell’s CS department on a national scale, as well as shaped our views about the kind of people we aspire to hire.

Who are some of the faculty on the Cornell cybersecurity team?

The first security faculty member we hired was Andrew Myers [Computer Science]. He asks the question: How can we design programming languages for which all programs you write must be inherently secure? What you read in the newspapers about attacks on today’s systems would be impossible for systems written in his languages. Most of today’s system vulnerabilities, which are created because programmers don’t think clearly about one or another corner case, would be nonexistent in those systems. He has also used his experience to identify important general properties of languages, as well as general security properties.

Others?

Gün Sirer [Computer Science], who was hired somewhat later, is more engineering-driven, and thus he broadened the department in very important ways. The engineering he does, though, is founded on novel principles and abstractions he’s discovered. For the past decade or so, he has been looking at systems that would make it easy to build in security. He worked on a new operating system design that used credentials for controlling access to resources, much as we use ID cards and keys as credentials to access physical spaces.

More recently, Gün and Cornell postdoctoral researcher Ittay Eyal have been working on crypto-currencies based on blockchains. They discovered a highly publicized flaw in the Bitcoin currency system and then another in the DAO [decentralized autonomous organization], which is built using a successor to Bitcoin. These are not vulnerabilities that arise because of simple programming errors; they are the result of subtle but foundational misconceptions in the system design.

Cornell also has Elaine Shi, who just joined the department in Ithaca, and Vitaly Shmatikov, Tom Ristenpart, and Rafael Pass [Computer Science] at Cornell Tech in New York City, as well as Ari Juels at the Jacobs Technion-Cornell Institute, which is part of Cornell Tech. These folks are working on a broad range of topics: privacy, cryptography, and crypto-currencies.

You were chief scientist on a grant that recently ended—a 10-year, $5 million-per-year National Science Foundation grant that established a Science and Technology Center on security called TRUST. What is one of the most important outcomes of the center?

Yes, the grant was to the University of California at Berkeley, Stanford, Carnegie Mellon, Vanderbilt, and Cornell. TRUST researchers produced an extraordinary list of technical accomplishments. I’m unable to give a full list right now, but I’ll mention the two themes that were guiding all the work. For the first five years, the center explored research that married technological solutions with policy solutions. Our center embraced the thesis: technology that ignored policy was likely to be deemed irrelevant for lack of deployment incentives; policy devised ignorant of technology was likely to be infeasible. My work with Mulligan on cybersecurity doctrine had its start there. The center’s second five-year period focused on exploring a science base for cybersecurity. Our center was the first large-scale push along this frontier, and I’m happy to report that our success inspired the National Security Agency and the Department of Defense to create significant science-of-security research programs with hopes of engaging a larger community.

In upcoming conversations, we’ll look at how safe your money is, particularly if you use Bitcoin, and at how a Cornell computer scientist builds security and reliability into computer languages.