Cybersecurity and Cyberwar

Authors: Peter W. Singer and Allan Friedman

This kind of power, and the fact that such attacks are fairly overt and obvious, means that DDoS attacks are often linked to some other goal. Criminal gangs may threaten to take a website offline unless its owners pay for “protection.” (“That's a nice website you've got there. It'd be a real shame if anything were to … happen.”) DDoS attacks may also be used as a diversion, overwhelming the attention and defenses of the victim while the attackers go after data elsewhere. They are also increasingly common as a form of political protest or even suppression. During the initial stages of the crisis in Syria in 2011, supporters of the Syrian regime shared DDoS tools to attack critics of the government and news organizations that covered the growing violence.

The bottom line is that vulnerabilities exist on every type of information system in cyberspace. Despite how scary they sound, many of them are not new. For instance, the common buffer overflow attack was first developed in the 1970s and almost brought down the adolescent Internet in the 1980s. By 1996, a detailed how-to guide appeared in a hacker's magazine.

As threats evolve, so too must our responses to them. Some can be mitigated with small changes in behavior or tweaks in code, while whole classes of vulnerabilities can be prevented only by developing and implementing new technologies. Other vulnerabilities are simply a structural consequence of how we use systems. As we explore in Part III, how we navigate these challenges comes down to accepting that bad guys are out to exploit these vulnerabilities and then developing the best possible responses that allow us to keep benefiting from the good parts of the cyber age.

Or you could opt out, sell your advanced luxury car and ride the bus instead. Just make sure you don't check the bus schedule online.

How Do We Trust in Cyberspace?

There is perhaps no more important duty for a citizen than to vote, and no part of the voting process is more important than preserving the integrity of that vote. And yet the Founding Fathers of American democracy didn't imagine a world of computerized voting machines, nor one in which Pac-Man might chomp his way into an election.

The incident started out as one of those fun projects that hackers like to undertake, spending a few afternoons playing around with an old piece of computer equipment. The difference here was that the hardware in question was an AVC Edge electronic voting machine used in the 2008 elections. Such systems are supposed to be tamper-proof, or at least reveal if anyone tries to doctor them. Yet the two young researchers from the University of Michigan and Princeton were able to reprogram the machine without leaving any sign on the tamper-resistant seals. While they chose to reprogram the voting machine to innocuously play Pac-Man, the beloved 1980s video game, they made it clear that the supposedly tamper-proof machine was vulnerable to far more insidious attacks.

As the incident shows, our dependence on digital systems means that increasingly we face the question of how we can trust them. For cybersecurity, the users must trust the systems, and the systems must know how to trust the users. Not every machine is going to have an unwanted Pac-Man on the screen to tell us something is wrong. How do we know that the computer is behaving as we expect it to or that an e-mail from our colleague is actually from that colleague? And, just as important, how do computers know that we are who we claim to be and that we are behaving the way we're supposed to?

Online trust is built on cryptography—the practice of secure communications that goes all the way back to the first codes that Julius Caesar and his generals used to keep their enemies from understanding their secret messages. We often think of cryptography as a means of keeping information confidential, but it also plays an equally important role in integrity, or the ability to detect any tampering.

A key building block in cryptography is the “hash.” A hash function takes any piece of data and maps it to a smaller, set-length output, with two specific properties. First, the function is one-way, which makes it very difficult to determine the original data from the output. Second, and even more important, it is incredibly hard to find two input pieces of data that generate the same output hash. This lets us use the hash function to “fingerprint” a document or an e-mail. This fingerprint can then verify the integrity of a document. If a trusted fingerprint of a document does not match the fingerprint that you generate yourself using the same method, then you have a different document.
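To make the fingerprint idea concrete, here is a minimal sketch in Python using the standard hashlib module; the choice of SHA-256 and the sample messages are ours, purely for illustration.

```python
# A sketch of hash-based fingerprinting using Python's standard hashlib module.
# SHA-256 and the sample messages are illustrative choices, not from the book.
import hashlib

def fingerprint(data: bytes) -> str:
    """Map data of any size to a fixed-length, one-way digest."""
    return hashlib.sha256(data).hexdigest()

original = b"Meet me at the bridge at noon."
tampered = b"Meet me at the bridge at one."

print(fingerprint(original))                            # 64 hex characters, whatever the input size
print(fingerprint(original) == fingerprint(original))   # True: same document, same fingerprint
print(fingerprint(original) == fingerprint(tampered))   # False: any change yields a different fingerprint
```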

Cryptographic integrity checks are useful, but for them to apply to trust, we need some means to introduce identity. Trust is both a noun and a transitive verb, after all; it requires someone or something to trust. Cryptographic digital signatures provide that trust by using “asymmetric” encryption. This explanation is starting to get complex, so it might be useful to take a brief diversion into understanding a few basic points of cryptography.

Modern cryptosystems rely on “keys” as the secret way of coding or decoding information on which trust is built. “Symmetric encryption” relies on sharing the same key with other trusted parties. I encrypt data with the same key that you use to decrypt it. It is like us both sharing the same key for a bank lockbox.
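As a rough sketch of the shared-key idea, the snippet below uses the Fernet recipe from the third-party Python cryptography package (our choice; the book names no particular tool). It presumes the two parties have already exchanged the key somehow, which is exactly the problem the next paragraph takes up.

```python
# A sketch of symmetric ("same key on both ends") encryption using the Fernet
# recipe from the third-party "cryptography" package (pip install cryptography).
# The key exchange itself is assumed to have already happened out of band.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # both parties must hold this same secret key
box = Fernet(shared_key)

token = box.encrypt(b"The lockbox combination is 12-34-56.")
print(token)                # ciphertext, unreadable without the key
print(box.decrypt(token))   # anyone holding the shared key can recover the message
```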

But what if we have never met each other? How will we exchange these secret keys securely? “Asymmetric cryptography” solves this problem. The idea is to separate a secret key into a public key, which is shared with everyone, and a private key that remains secret. The two keys are generated such that something encrypted with a public key can be decrypted only with the corresponding private key, and vice versa. Figure 1.2 illustrates how public key cryptography works to protect both the confidentiality and the integrity of a message. Suppose that Alice and Bob, the classic alphabetical protagonists of cryptographic examples, want to communicate. They each have a pair of keys and can access each other's public keys. If Alice wants to send Bob a message, she encrypts the message with Bob's public key. Then the only person who can decrypt it is whoever holds Bob's private key.
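Here is a minimal sketch of that Alice-and-Bob exchange, again leaning on the third-party Python cryptography package; the RSA key size, OAEP padding, and message are illustrative assumptions rather than anything specified in the text.

```python
# A sketch of the Alice-and-Bob exchange with RSA public key encryption, using the
# third-party "cryptography" package. Key size, padding, and message are made up.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Bob generates a key pair; the public half can be handed to anyone, even strangers.
bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public_key = bob_private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Alice encrypts with Bob's public key ...
ciphertext = bob_public_key.encrypt(b"For Bob's eyes only.", oaep)

# ... and only the holder of Bob's private key can decrypt it.
print(bob_private_key.decrypt(ciphertext, oaep))
```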

A digital signature of a message ties together the notion of a digital fingerprint with public key cryptography. Returning to our friends above, Alice takes the fingerprint of the document, signs it with her private key, and passes it to Bob along with the unencrypted document. Bob verifies the signature using Alice's public key and compares it with a fingerprint he generates himself from the unencrypted document. If they do not match, then someone has changed the document in between. These digital signatures can provide integrity for any type of data and can be chained to allow for transitive trust.
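The signing step can be sketched the same way: Alice signs a hash of the document with her private key, and Bob's verification fails if even a single byte changes. The document text and padding choices below are again our own illustrative assumptions.

```python
# A sketch of a digital signature with the third-party "cryptography" package:
# Alice signs a hash of the document with her private key; Bob verifies it with
# her public key and learns whether the document was altered in transit.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

alice_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public_key = alice_private_key.public_key()

document = b"Quarterly report, final version."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = alice_private_key.sign(document, pss, hashes.SHA256())

received = b"Quarterly report, final version."   # change one byte and verification fails
try:
    alice_public_key.verify(signature, received, pss, hashes.SHA256())
    print("Fingerprints match: the document is what Alice signed.")
except InvalidSignature:
    print("Mismatch: someone changed the document in between.")
```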

But where does the trust come from in the first place? For example, I can verify that software I have downloaded from a company is valid by checking it against the company's public key, but how do I know that the key actually belongs to that company? Remember, a signature only implies access to the private key that corresponds with the public key, not the validity of that public key. Asymmetric cryptography therefore requires some means of trusting the public keys themselves. In most modern systems we rely on a “trusted third party”: organizations known as certificate authorities (CAs) that issue signed digital “certificates” explicitly tying an entity to a public key. The CAs' own public keys are known widely enough that they cannot easily be spoofed. If you trust the CA, then you can trust the public key signed by the CA.

Figure 1.2 How public key cryptography protects both the confidentiality and the integrity of a message.

Every person online uses this system on a daily basis, even if we do not realize it. When we visit an HTTPS web address and see the little lock icon verifying the secure connection, we are trusting the certificate authorities. Our web browser asks the secure domain for its public key and a certificate signed by a CA, tying the public key explicitly to the Internet domain. In addition to verifying that the server our browser is talking to belongs to the organization it claims to belong to, this also enables trusted communication by exchanging encryption keys. Such trust serves as the basis for almost all secure communication on the Internet between unaffiliated parties.
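A rough approximation of what happens behind that lock icon can be written in a few lines with Python's standard ssl module, which loads the operating system's store of trusted CA certificates and refuses the connection if the server's certificate does not chain back to one of them. The hostname below is just a placeholder.

```python
# A sketch of the browser's certificate check using Python's standard ssl module.
# The default context loads the system's trusted CA certificates; the handshake
# fails if the server's certificate does not chain to one of them.
import socket
import ssl

hostname = "example.org"                  # placeholder domain for illustration
context = ssl.create_default_context()    # trusted CAs come from the operating system

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate subject:", cert["subject"])   # who the certificate was issued to
        print("Issuing CA:", cert["issuer"])              # the authority we are trusting
```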

As the source of trust, certificate authorities occupy a critical role in the cyberspace ecosystem, perhaps too important. If someone can steal a CA's signing key, then the thief (or whoever they pass the key on to) could intercept “secure” traffic without the victim noticing. It is hard to pull off, but it has been done. In 2011, someone (later leaks fingered the NSA) stole a Dutch CA's keys and used them to intercept Iranian users' access to Google's Gmail. Some have complained that there are too many CAs around the world, many in countries with less than savory histories in security and privacy. As attacks evolve, the roots of trust will be even more at risk.

If one side of trust online is about the user feeling confident about the system and other users, the other side is how systems should trust the users. After identification and authentication, a system must determine what the user is authorized to do. Most systems use some kind of “access control” to decide who can do what. At its simplest, access control provides the ability to read, write, or execute code in an operating environment.

The core of any system is the access control policy, a matrix of subjects and objects that defines who can do what to whom. This can be simple (employees can read any document in their small work group, while managers can access any document in their larger division) or much more complicated (a doctor may read any patient's file, as long as that patient has one symptom that meets a prespecified list, but may only write to that file after the billing system can verify eligibility for payment). Good access control policies require a clear understanding of both organizational roles and the architecture of the information system as well as the ability to anticipate future needs. For large organizations, whose users make extensive use of data, defining this policy perfectly is incredibly difficult. Many believe it may even be impossible.
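A toy version of such a subject-and-object matrix might look like the sketch below; the roles, resources, and rules are invented for illustration, and anything not explicitly granted is denied.

```python
# A toy subject/object access control matrix with made-up roles, resources, and
# rules, purely for illustration; anything not explicitly granted is denied.
from enum import Flag, auto

class Permission(Flag):
    NONE = 0
    READ = auto()
    WRITE = auto()
    EXECUTE = auto()

# The matrix: (subject, object) -> allowed actions.
POLICY = {
    ("employee", "workgroup_docs"): Permission.READ,
    ("manager", "workgroup_docs"): Permission.READ | Permission.WRITE,
    ("manager", "division_docs"): Permission.READ,
    ("billing_system", "patient_file"): Permission.READ | Permission.WRITE,
}

def is_authorized(subject: str, obj: str, requested: Permission) -> bool:
    """Grant the request only if the policy explicitly allows it (default deny)."""
    granted = POLICY.get((subject, obj), Permission.NONE)
    return requested in granted

print(is_authorized("employee", "workgroup_docs", Permission.READ))   # True
print(is_authorized("employee", "division_docs", Permission.READ))    # False: not in the matrix
print(is_authorized("manager", "workgroup_docs", Permission.WRITE))   # True
```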

Failures of access control have been behind some of the more spectacular cyber-related scandals in recent years, like the case of Bradley Manning and WikiLeaks in 2010, which we explore next, and the 2013 Edward Snowden case (where a low-level contractor working as a systems administrator at the NSA had access to a trove of controversial and top-secret programs, which he leaked to the press). These cases illustrate poor access control in all its glory, from low-level individuals being granted default access to anything and everything they wanted, to poor efforts to log and audit access (for several months after Edward Snowden went public with leaked documents about its various monitoring programs, the NSA still didn't know how many more documents he had taken but not yet released).

Whether the organization is the NSA or a cupcake store, the questions about how data is compartmentalized are essential. Unfortunately, most organizations either greatly overprovision or underprovision access, rather than trying to find a good medium. Overentitlements grant too much access to too many without a clear stake in the enterprise, leading to potentially catastrophic WikiLeaks-type breaches. In many business fields, such as finance and health care, this kind of overaccess even runs the risk of violating “conflict of interest” laws that are supposed to prevent individuals from having access to certain types of information. Finally, and most relevant to cybersecurity, if access control is poor, organizations can even lose protection of their intellectual property under trade secret law.

At the other extreme, underentitlement has its own risks. In business, one department may inadvertently undermine another if it doesn't have access to the same data. In a hospital, it can literally be a matter of life and death if doctors cannot easily find out information they need to know in an emergency. Former intelligence officials have implied that the stakes are even higher in their world, where a lack of information sharing can leave crucial dots unconnected and terrorist plots like 9/11 missed.

What this all illustrates is that even amid a discussion of technology, hashes, and access control, trust always comes back to human psychology and the explicit risk calculations behind our decisions. Pac-Man isn't an actual man, but the system that allowed him to enter a voting machine, and the consequences of that access, are all too human.

Focus: What Happened in WikiLeaks?

bradass87:
hypothetical question: if you had free reign [sic] over classified networks for long periods of time … say, 8–9 months … and you saw incredible things, awful things … things that belonged in the public domain, and not on some server stored in a dark room in Washington DC … what would you do? …

(12:21:24 PM) bradass87:
say … a database of half a million events during the iraq war … from 2004 to 2009 … with reports, date time groups, lat-lon locations, casualty figures …? or 260,000 state department cables from embassies and consulates all over the world, explaining how the first world exploits the third, in detail, from an internal perspective? …
