Cybersecurity and Cyberwar

by Peter W. Singer and Allan Friedman

The entire problem was perhaps best illustrated by one of the most cited cartoons in history. In 1993, the New Yorker magazine published a drawing by Peter Steiner of two dogs sitting near a computer. One dog tells the other, “On the Internet, nobody knows you're a dog.”

Yet this isn't to say that people can't find out private details about you if they want. Every activity on the Internet involves data being routed to and from an Internet Protocol (IP) address. As we saw in the prior section, an IP address is a numerical label assigned to an addressable connection on the Internet. For most consumers, the IP address is not permanently assigned to their device. Instead, the IP address is dynamic: the consumer's Internet service provider assigns an IP address for a period of time, but it might be reassigned to someone else after the consumer disconnects. However, if an Internet service provider retains the relevant records, it can correlate the IP address at a specific date and time to a particular service subscriber.
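
To make the mechanics concrete, here is a minimal Python sketch (ours, not the book's, with invented lease records and the documentation address range 203.0.113.0/24) of how a provider could map an address and a timestamp back to a subscriber:

    from datetime import datetime

    # Hypothetical DHCP lease records an ISP might retain:
    # (IP address, subscriber, lease start, lease end)
    leases = [
        ("203.0.113.7", "subscriber-1041",
         datetime(2013, 5, 1, 9, 0), datetime(2013, 5, 1, 17, 0)),
        ("203.0.113.7", "subscriber-2217",
         datetime(2013, 5, 1, 17, 0), datetime(2013, 5, 2, 8, 0)),
    ]

    def subscriber_for(ip, when):
        """Return whoever held this IP address at the given moment."""
        for lease_ip, subscriber, start, end in leases:
            if lease_ip == ip and start <= when < end:
                return subscriber
        return None  # no retained record covers this moment

    print(subscriber_for("203.0.113.7", datetime(2013, 5, 1, 20, 30)))
    # -> subscriber-2217: the same address maps to different people over time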

An IP address is not in and of itself information about an identifiable individual. But it can provide some information about the geographic location and the means by which that individual accesses the Internet. It is the potential for the IP address to be combined (or reasonably be combined) with other information that has privacy advocates concerned. If you can combine enough of this online and offline information, you might have enough data to make a high-probability guess about who was doing what and where. For instance, in the 2012 scandal that enveloped CIA director General David Petraeus, the FBI was able to trace the anonymous sender of a series of threatening e-mails back to the business center of a hotel at which his mistress turned out to be staying.

The information gathered about identity is not the same as proof of identity. Relying on the IP address would be like relying on license plates to identify drivers. A sophisticated user can easily hide or disguise her IP address by routing her activities through another point on the Internet, making it appear that that node was responsible for the original traffic. There are, however, many other types of data that can be collected that are harder to hide. Even the patterns of how individual users browse and click through a website can be used to identify them.

This question of how we can identify and authenticate online activities is a different question from how we should. You may not have wanted your Social Security number to be revealed at a party. Or that dog might have preferred its identity remain secret, at least until the two of you had gone out on a few more online dates.

For the purposes of cybersecurity, the bottom line is that digital identity is a balance between protecting and sharing information. Limiting acquired information is not only good for privacy, it can prevent others from gaining information for more sophisticated authentication fraud. At the same time, each system has incentives to maximize the amount of data it collects, as well as use that data for its own goals.

What Do We Mean by “Security” Anyway?

There's an old joke in the security industry about how to secure any computer: Just unplug it.

The problem is not only that the joke is becoming outdated in an era of wireless and rechargeable devices, but also that once a machine is plugged in, there is a practically infinite number of ways its use might deviate from its intended purpose. This deviation is a malfunction. When the difference between the expected behavior and the actual behavior is caused by an adversary (as opposed to simple error or accident), the malfunction becomes a “security” problem.

Security isn't just the notion of being free from danger, as it is commonly conceived, but is associated with the presence of an adversary. In that way, it's a lot like war or sex; you need at least two sides to make it real. Things may break and mistakes may be made, but a cyber problem only becomes a cybersecurity issue if an adversary seeks to gain something from the activity, whether to obtain private information, undermine the system, or prevent its legitimate use.

To illustrate, in 2011 the Federal Aviation Administration ordered nearly half of US airspace shut down and more than 600 planes grounded. It seemed like a repeat of how American airspace was shut down after the 9/11 attacks. But this incident wasn't a security issue, as there was no one behind it. The cause was a software glitch in a single computer at the Atlanta headquarters building. Take the same situation and change the glitch to a hack: that's a security issue.

The canonical goals of security in an information environment result from this notion of a threat. Traditionally, there are three goals: Confidentiality, Integrity, and Availability, sometimes called the “CIA triad.”

Confidentiality refers to keeping data private. Privacy is not just some social or political goal. In a digital world, information has value. Protecting that information is thus of paramount importance. Not only must internal secrets and sensitive personal data be safeguarded, but transactional data can reveal important details about the relationships of firms or individuals. Confidentiality is supported by technical tools such as encryption and access control as well as legal protections.
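
As a concrete illustration of the technical side (ours, not the book's), the sketch below uses the Fernet recipe from the third-party Python cryptography package to keep a message confidential; whoever controls the key controls access:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # secret key; access control decides who holds it
    cipher = Fernet(key)

    token = cipher.encrypt(b"Account 4421 wired $2M to supplier X")
    # Anyone who intercepts `token` without the key sees only ciphertext.
    print(cipher.decrypt(token))  # only the key holder recovers the plaintext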

Integrity is the most subtle but maybe the most important part of the classic information security triumvirate. Integrity means that the system and the data in it have not been improperly altered or changed without authorization. It is not just a matter of trust. There must be confidence that the system will be both available and behave as expected.
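
One common technical mechanism behind integrity is a keyed message authentication code. The Python sketch below (our illustration, with a made-up key and message) shows how any unauthorized change to data breaks verification:

    import hashlib
    import hmac

    key = b"shared-secret"  # hypothetical key known to sender and receiver
    message = b"valve_setting=CLOSED"
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag; an unauthorized change breaks the match.
    tampered = b"valve_setting=OPEN"
    print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest()))   # True
    print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).hexdigest()))  # False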

Integrity's subtlety is what makes it a frequent target for the most sophisticated attackers. They will often first subvert the mechanisms that try to detect attacks, in the same way that complex diseases like HIV-AIDS go after the human body's natural defenses. For instance, the Stuxnet attack (which we explore later in Part II) was so jarring because the compromised computers were telling their Iranian operators that they were functioning normally, even as the Stuxnet virus was sabotaging them. How can we know whether a system is functioning normally if we depend on that system to tell us about its current function?

Availability means being able to use the system as anticipated. Here again, it's not merely the system going down that makes availability a security concern; software errors and “blue screens of death” happen to our computers all the time. It becomes a security issue when and if someone tries to exploit the lack of availability in some way. An attacker could do this either by depriving users of a system that they depend on (such as how the loss of GPS would hamper military units in a conflict) or by merely threatening the loss of a system, known as a “ransomware” attack. Examples of such ransoms range from small-scale hacks on individual bank accounts all the way to global blackmail attempts against gambling websites before major sporting events like the World Cup and Super Bowl.
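
Availability is also the easiest property to measure from outside. A minimal Python probe (ours; any host and port will do) simply asks whether a service still answers in time, though it cannot say whether an adversary is behind a failure:

    import socket

    def is_reachable(host, port, timeout=3.0):
        """Crude availability probe: can we open a TCP connection in time?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False  # unreachable, but this alone can't tell glitch from attack

    print(is_reachable("example.com", 443))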

Beyond this classic CIA triangle of security, we believe it is important to add another property: resilience. Resilience is what allows a system to endure security threats instead of critically failing. A key to resilience is accepting the inevitability of threats and even limited failures in your defenses. It is about remaining operational with the understanding that attacks and incidents happen on a continuous basis. Here again, there is a parallel to the human body. Your body still figures out a way to continue functioning even if your external layer of defense—your skin—is penetrated by a cut or even bypassed by an attack like a viral infection. Just as in the body, in the event of a cyber incident, the objective should be to prioritize resources and operations, protect key assets and systems from attacks, and ultimately restore normal operations.
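
In code, resilience often takes the shape of redundancy and graceful degradation. The Python sketch below (our illustration, with stand-in primary and backup sources) keeps operating when one path fails instead of failing hard:

    import time

    def resilient_fetch(sources, attempts=3, delay=1.0):
        """Try each redundant source in turn; degrade gracefully, don't fail hard."""
        for _ in range(attempts):
            for fetch in sources:
                try:
                    return fetch()
                except ConnectionError:
                    continue      # this path is down; try the next one
            time.sleep(delay)     # back off, then retry the whole list
        return None               # last resort: report degraded, but keep running

    def primary(): raise ConnectionError("primary link compromised")
    def backup():  return "data from backup replica"

    print(resilient_fetch([primary, backup]))  # -> data from backup replica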

All of these aspects of security are not just technical issues: they are organizational, legal, economic, and social as well. But most importantly, when we think of security we need to recognize its limits. Any gain in security always involves some sort of trade-off. Security costs money, but it also costs time, convenience, capabilities, liberties, and so on. Similarly, as we explore later on, the different threats to confidentiality, integrity, availability, and resilience each require different responses. Short of pulling the plug, there's no such thing as absolute security.

What Are the Threats?

It sounds odd that reporters took a passenger jet to Idaho just to watch a cyberattack, but that's exactly what happened in 2011.

Worried that the public did not understand the magnitude of growing cyberthreats, the Department of Homeland Security flew journalists from around the country to the Idaho National Laboratory. Only four years earlier, the INL, an incredibly secure and secretive facility that houses Department of Energy nuclear research, had conducted a top-secret test to destroy a large generator via cyberattack. In 2011, in an effort to raise awareness about cyberthreats, government experts not only declassified a video of the 2007 test, but also held a public exercise in which journalists could “watch” a staged cyberattack on a mock chemical plant. The government wanted to show that even its own experts couldn't prevent a team of hired hackers (known as a “red team”) from overwhelming the defenses of a critical facility.

This episode is a good illustration of how those who professionally think and talk about cybersecurity worry that their discussions of threats are ignored or downplayed. Frustrated, they resort to turning the volume up to the proverbial 11, à la Spinal Tap, conducting outlandish exercises and talking about the matter only in the most extreme ways, which then echo out into the media and public. Indeed, following a series of warnings by US government officials, by 2013 there were over half a million online references in the media to a “cyber Pearl Harbor” and another quarter million to a feared “cyber 9/11.”

The complacency these experts worry about stems in part from our political system's reluctance to address difficult, complex problems in general, and cybersecurity in particular. But this kind of tenor also feeds into a misunderstanding of the threats. For example, three US senators sponsored a large cybersecurity bill in the summer of 2011 and wrote an op-ed in the Washington Post urging support for their legislation. They cited a series of recent, high-profile attacks, including those against the Citigroup and RSA companies and the Stuxnet worm's attack on Iranian nuclear research. The problem is that these three cases reflected wildly different threats. The Citigroup attack was about financial fraud, the RSA attack was industrial theft, and Stuxnet was a new form of warfare. They had little in common other than that they involved computers.

When discussing cyber incidents or fears of potential incidents, it is important to separate the idea of vulnerability from threat. An unlocked door is a vulnerability but not a threat if no one wants to enter. Conversely, one vulnerability can lead to many threats: that unlocked door could lead to terrorists sneaking in a bomb, competitors walking out with trade secrets, thieves purloining valuable goods, local hooligans vandalizing property, or even cats wandering in and distracting your staff by playing on the keyboards. The defining aspects of threats are the actor and the consequence.

The acknowledgment of an actor forces us to think strategically about threats. The adversary can pick and choose which vulnerability to exploit for any given goal. This implies that we must not only address a range of vulnerabilities with respect to any given threat, but also understand that the threat may evolve in response to our defensive actions.

There are many kinds of bad actors, but it is too easy to get lulled into using media clichés like “hackers” to lump them all together. An actor's objective is a good place to start when parceling them out. In the variety of attacks cited by the senators above, the Citigroup attackers wanted account details about bank customers, with an ultimate goal of financial theft. In the attack on RSA, the attackers wanted key business secrets in order to spy on other companies. For Stuxnet (a case we'll explore further in Part II), the attackers wanted to disrupt industrial control processes involved in uranium enrichment, so as to sabotage the Iranian nuclear program.

Finally, it is useful to acknowledge when the danger comes from one of your own. As cases like Bradley Manning and WikiLeaks or Edward Snowden and the NSA scandal illustrate, the “insider threat” is particularly tough because the actor can search for vulnerabilities from within systems designed only to be used by trusted actors. Insiders can have much better perspectives on what is valuable and how best to leverage that value, whether they are trying to steal secrets or sabotage an operation.

It is also important to consider whether the threat actor wants to attack you, or just wants to attack. Some attacks target specific actors for particular reasons, while other adversaries go after a certain objective regardless of who may control it. Untargeted malicious code could, for example, infect a machine via e-mail, search for anyone's stored credit card details, and relay those details back to its master without any human involvement. The key difference in these automated attacks is one of cost, both from the attacker's and the defender's perspective. For the attacker, automation hugely reduces cost, as they don't have to invest in all the tasks needed, from selecting the victim to identifying the asset to coordinating the attack. Their attack costs roughly the same no matter how many victims they get. A targeted attack, on the other hand, can quickly scale up in costs as the number of victims rises. These same dynamics shape the expected returns. To be willing to invest in targeted attacks, an attacker must have a higher expected return value with each victim. By contrast, automated attacks can have much lower profit margins.
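
The economics can be made concrete with back-of-the-envelope arithmetic. In the Python sketch below (illustrative numbers of our own invention), the automated attacker pays one fixed cost however many victims there are, while the targeted attacker pays per victim:

    # Toy cost model: automation pays a fixed cost once; targeting pays per victim.
    fixed_cost_automated = 10_000       # build the malware, set up infrastructure
    cost_per_target      = 5_000        # recon and tailoring for each chosen victim
    revenue_per_victim_auto     = 2     # pennies per victim, but at huge scale
    revenue_per_victim_targeted = 50_000

    for victims in (100, 10_000, 1_000_000):
        auto_profit = victims * revenue_per_victim_auto - fixed_cost_automated
        print(f"{victims:>9} victims: automated profit = {auto_profit:>10}")
    # Automation loses money at small scale and wins at large scale, while a
    # targeted attack must clear its per-victim cost on every single victim:
    print("targeted profit per victim =", revenue_per_victim_targeted - cost_per_target)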
