Using game theory to protect networks, infrastructure
- By Patrick Marshall
In June 2013, U.S. officials disclosed that unnamed parties -- later determined to be Chinese hackers -- had stolen the designs for the F-35, America’s next-generation fighter jet. Eventually, cybersecurity experts determined that it wasn’t a military computer that had been hacked, but rather a private-sector contractor’s network. The breach resulted in a one-year delay in the F-35 program and a 50 percent increase in its cost, since much of the software and design details had to be redone. Plus, China’s military aircraft development program got a big boost.
Federal contracts, of course, specify that contractors must ensure their networks are secure, but in practice security falls short, either because the contractor fails to rigorously meet the requirements or because the requirements overlooked a vulnerability.
To better understand the human decision-making that too often results in insecure systems, researchers at Purdue University are applying game theory to security in computer networks, power grids and other infrastructure.
According to Shreyas Sundaram, team leader and assistant professor in Purdue’s School of Electrical and Computer Engineering, the three-year project is focused on combining the tools of two fields -- game theory and behavioral economics -- to better understand how people make decisions about protecting sensitive assets from hackers, whether data on a network or a power grid.
Game theory, Sundaram said, makes predictions based on people's tendency to make decisions based on “utility functions,” their perception of risk and potential benefits. “Typically, the benefits to the players depend not only on the actions they are taking but on the actions that other people are taking as well,” he said.
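The interdependence Sundaram describes can be made concrete with a toy model (my own illustration, not the Purdue team's actual formulation): each player pays for its own security investment, but the probability of a breach depends on the combined investment of both players.

```python
import math

def breach_probability(x_own, x_other):
    # Toy assumption: breach risk falls exponentially with the
    # combined security investment of both players.
    return math.exp(-(x_own + x_other))

def utility(x_own, x_other, loss=100.0):
    # A player's payoff: pay the cost of your own investment, and
    # bear the expected loss -- which depends on BOTH players'
    # investments, not just your own.
    return -x_own - loss * breach_probability(x_own, x_other)
```

Here the same investment of 1 unit yields a very different payoff depending on what the other player does: `utility(1.0, 3.0)` is far better than `utility(1.0, 0.0)`, which is exactly the kind of coupling Sundaram points to.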
At the same time, Sundaram said, behavioral economics and behavioral psychology have shown that humans deviate from these classical models of decision-making in very systematic ways.
In classical game theory, predictions about behavior are derived from these classical utility functions, he said. But behavioral economics has shown that such utility functions don't necessarily capture the ways humans actually make decisions. "We wondered what would happen if we started to bring some of these models into a game-theoretic framework," he said.
Got it? Neither did I, at least at first.
“If we're protecting the same system, and you are investing a lot, then maybe I don't need to invest as much because you've done the hard work for me,” Sundaram explained. And how much the players invest in protecting their systems depends on how they assess the risk and the value to be gained from spending on protections.
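That free-riding logic can be sketched numerically. In the same toy model as above (again my illustration, with an assumed exponential risk function), a player's best investment shrinks as the other player invests more, and can drop to zero once the other player has invested heavily enough.

```python
import math

def best_response(x_other, loss=100.0):
    # Search a grid of candidate investments for the one that
    # maximizes this player's payoff, taking the other player's
    # investment as given (toy model, not the Purdue formulation).
    grid = [i / 100 for i in range(0, 1001)]  # 0.00 to 10.00
    return max(grid, key=lambda x: -x - loss * math.exp(-(x + x_other)))
```

If the other player invests nothing, the best response is a substantial investment; if the other player invests 5 units, the best response falls to zero: the other side has "done the hard work for me."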
It turns out, however, that humans are not very accurate in assessing risks. That’s where behavioral economics comes in.
“Humans tend to overweight low-probability events and underweight high-probability events,” Sundaram said. Armed with that knowledge, an analyst can weigh the perceived risks at each node in an interconnected network and compare them to the actual risks. Having to assign a probability to each node forces a closer look at those vulnerabilities, he said. “You could then look at ways to strengthen those links.”
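The distortion Sundaram describes is commonly captured in behavioral economics by a probability weighting function. One standard choice, the Prelec function, is shown below as an illustration; the article does not say which weighting model the Purdue team uses.

```python
import math

def prelec_weight(p, alpha=0.65):
    # Prelec probability weighting function: w(p) = exp(-(-ln p)^alpha).
    # With alpha < 1, rare events are perceived as more likely than
    # they are, and likely events as less likely than they are.
    return math.exp(-(-math.log(p)) ** alpha)
```

For example, a 1 percent risk is perceived as several times larger than it really is, while a 90 percent risk is perceived as noticeably smaller, matching the overweight/underweight pattern Sundaram cites.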
Sundaram’s partner in the project is Timothy Cason, a professor of behavioral economics at Purdue.
By combining game theory and behavioral economics, Sundaram said, “we are developing techniques and tools that would specify these kinds of interconnected systems, model losses and probabilities … and spit out the investments that the decision-makers would make if they were optimizing with a clear view of the actual attack probabilities.”
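The gap between behavioral and optimal investment can be illustrated by computing the spending a decision-maker would choose under the true attack probability versus a perceptually distorted one. This is a toy sketch under assumed functional forms (Prelec weighting, exponential risk reduction), not the team's actual tool.

```python
import math

def prelec_weight(p, alpha=0.65):
    # Perceived probability: overweights rare events, underweights likely ones.
    return math.exp(-(-math.log(p)) ** alpha)

def chosen_investment(attack_prob, weight=lambda p: p, loss=100.0):
    # Pick the investment (0.00 to 10.00) minimizing cost plus
    # expected loss, where investing x cuts the attack's success
    # probability by a factor of e^(-x) (toy assumption).
    grid = [i / 100 for i in range(0, 1001)]
    return min(grid, key=lambda x: x + loss * weight(attack_prob) * math.exp(-x))
```

Comparing `chosen_investment(p)` with `chosen_investment(p, prelec_weight)` shows the systematic bias: the behavioral decision-maker overspends against rare attacks and underspends against likely ones, relative to someone "optimizing with a clear view of the actual attack probabilities."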
The researchers are also employing game theory and behavioral economics to develop more effective tools for incentivizing others -- such as the contractor whose breached network exposed the F-35 designs -- to make better security decisions.