12 August 2019

A DEATH KNELL FOR THE INTERNATIONAL NORMS OF CYBER CONFLICT

Pukhraj Singh

On July 8, Michael Schmitt, a law professor and former judge advocate in the US Air Force, posted a perplexing tweet about changing his mind on the “status of cyber capabilities as ‘weapons.’” He followed it up with the link to a recent paper he coauthored for the International Law Studies journal of the US Naval War College.

Schmitt is one of the key architects of the guiding document on international norms of cyber conflict, widely known as the Tallinn Manual. His latest paper severely curtails the legal logic at the heart of the manual, logic that, even prior to Schmitt’s admission, was thought to be shaky at best. In fact, the newer set of assumptions proposed by Schmitt may not stand up to scrutiny either, further limiting the manual’s applicability to real-world scenarios.

A decade after it was initiated, the most prominent project that sought to define responsible state behavior in cyberspace has developed cracks.


In the aftermath of the Russian cyberattack against Estonia in 2007, which crippled the country’s internet infrastructure, a group of legal luminaries was convened by a newly established cyber defense center in Tallinn to formulate the ground rules of “cyberwar.”

Under the direction of NATO, the group tried importing the taxonomy of international humanitarian law (IHL)—a set of rules governing the predominantly physical conflicts of the past, with their widespread “kinetic” effects. The result was the legally nonbinding Tallinn Manual, released in 2013, with iteration 2.0 coming out in 2017.

In his recent paper, Schmitt has come to the foregone conclusion that cyber capabilities are neither “weapons” nor “means,” but “methods” of warfare. In other words, cause and effect in cyber operations are not analogous to the use of conventional munitions and weapon systems, which IHL is habituated to.

He also conceded that the luminaries behind the Tallinn Manual were too quick to apply the legal shortcut of “reasoning by analogy”—in the unfounded hope that the parameters of cyber conflict would eventually fall in line with the law of armed conflict, much like the emerging military technologies of the past. The obvious blind spots in that theory have since become gaping holes.

The Tallinn Manual belongs to a growing list of global initiatives—founded in an ambitious bid to impose some semblance of order on the wild west of cyberspace—which have done a volte-face.

The Wassenaar Arrangement—which in 2013 added “intrusion software” to its export control lists, an attempt at arms control for “cyberweapons”—met a similar fate. The deliberations of the United Nations’ Group of Governmental Experts, stretching over a period of thirteen years—aiming to calm the growing cyber hostilities between the United States, Russia, and China—fell flat in 2017.

The Tallinn Manual has fostered a global academic subculture of strategic thought that unquestioningly proceeded from its flawed baseline assumptions. It is not as if technically minded cybersecurity professionals remained oblivious to its structural weaknesses; they had long seen the collapse coming.

Dave Aitel, a former exploitation engineer of the National Security Agency and a leading voice on the offensive aspects of cybersecurity, was among the first to flag the obvious loopholes in the process. He has called the Tallinn Manual the “Bowling Green Massacre” and the “Talmud of cyberwar”: its esoteric wording and hurried consensus did not stand up to technical logic.

The important details glossed over by the Tallinn Manual’s broad strokes are far too many to cite: how territory manifests in cyberspace, how cyber operations reflexively mutate, the haziness around intent and impact, and the manual’s assumption that certain blanket restrictions are even enforceable.

The legal experts mistakenly assumed that cyberattacks have calculable and—most importantly—controllable effects. Any cyber operator worth her salt knows that even mission-driven, militaristic hacking thrives under great, terrifying ambiguity.

The military interpretation of an “armed attack” is derived from a clear understanding of cause and effect, or intent and impact. In the case of a tank, a cruise missile, or a bunker-buster bomb, those could be reasonably derived; this is not the case with a “cyberweapon.”

Col. Gary Brown, a former staff judge advocate of the US Cyber Command, believes that “both quantitatively and qualitatively, espionage and warfighting in cyberspace can be indistinguishable until the denouement.”

Brown adds that “policymakers have tended to view cyber operations as strictly delineated: offence or defence; espionage or military operations.” In his view, reality defies such stark categorization—“Determining when one type of cyber operation ends, and another begins, is challenging.”

This has had real-world consequences. While Stuxnet could be construed as an armed attack, it is reasonable to argue that the reconnaissance malware Flame, which preceded it, may have warned cyber operators of an imminent attack (“anticipatory self-defence” is a contestable but reasonably valid concept under international law). Stuxnet could very well have been a pre-emptive counteraction, much like the “Left of Launch” cyber strategy of the United States manifesting in Iran and North Korea.

Or, if one deconstructs the Department of Justice’s indictments against the Russian hackers who interfered with the 2016 US elections, it is amply clear that the United States or its allies had pre-positioned cyber implants within Russia’s military networks. That espionage malware could conveniently have been repurposed, or even reinterpreted, as an act of aggression. It hypothetically provides legal cover to the Russian operation: a mode of retaliation in defense of Russia’s own sovereignty, a right guaranteed by the law of armed conflict.

In fact, defending its disruptive action in a lawsuit filed by the Democratic National Committee in a New York court, the Russian government issued a “Statement of Immunity” in November 2018, claiming that the “military attack” was a “quintessential sovereign act.” Amusingly, the submission even went to the extent of invoking the provisions of the US Foreign Sovereign Immunities Act to bolster its argument. The Russian government, the statement argued, is fully justified in keeping the qualitative reasons behind that “sovereign act” to itself.

The ambiguity of cyberattacks is not limited to legal or operational interpretations; it challenges the very fundamentals of computer science. A “cyberweapon” is a tool-chain that alters the behavior of targeted computers and networks through what is generally called exploitation. It throws the targeted system into a state that, however unpredictable, is intrinsic and not alien to its functionality.

Sergey Bratus, an associate professor at Dartmouth, believes that “advanced exploitation is rapidly becoming synonymous with the system operating exactly as designed—and yet getting manipulated by attackers.” Bratus even has a term for exploited systems that enter previously unknown states not part of their intended design: weird machines.
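To make the idea concrete, here is a minimal, hypothetical C sketch; the session structure, field names, and overlong input are invented for illustration and do not come from Bratus’s work. Every instruction runs exactly as written, yet a crafted input carries the program into a state its designer never modelled.

/*
 * A minimal, hypothetical sketch of a "weird machine". Every instruction
 * below runs exactly as written, yet attacker-controlled input pushes the
 * program into a state its designer never intended. Struct layout is
 * implementation-defined and modern compiler hardening may abort this at
 * runtime, so treat it purely as an illustration.
 */
#include <stdio.h>
#include <string.h>

struct session {
    char name[16];  /* copied from untrusted input without a length check */
    int  is_admin;  /* the designer intends this to stay 0 for normal users */
};

static void login(const char *input)
{
    struct session s = { .name = "", .is_admin = 0 };

    /*
     * The flaw: strcpy trusts the caller to supply at most 15 bytes.
     * A longer input spills past .name and overwrites .is_admin, a state
     * transition that is intrinsic to the machine yet alien to the
     * program's intended design.
     */
    strcpy(s.name, input);

    if (s.is_admin)
        printf("unintended state: admin granted to '%.16s'\n", s.name);
    else
        printf("normal login for '%s'\n", s.name);
}

int main(void)
{
    login("alice");                 /* intended behavior */
    login("AAAAAAAAAAAAAAAA\x01");  /* 17 chars plus NUL: overflows name[16] into is_admin */
    return 0;
}

Nothing foreign executes here; the attacker supplies only data the program was built to accept, and the machine’s own rules do the rest.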

The crux of the matter is that a cyberattack battles extreme uncertainty, not the adversary, to achieve its mission objectives. It is impossible to document its impact as malicious or unintended as it manifests over adversarial computing infrastructure—which is nothing but millions of layers of abstraction.

Despite the hundreds of millions of dollars, months of rehearsal, and a full-blown nuclear centrifuge testbed at their disposal, the operators behind Stuxnet could not foresee it going out of control. Should India, the third most infected country, have interpreted it as an armed attack?

Jason Healey, a senior research scholar at Columbia’s School of International and Public Affairs, wrote in his book A Fierce Domain: “Cyber incidents have so far tended to have effects that are either widespread but fleeting, or persistent but narrowly focused. No attacks, thus far, have been both widespread and persistent.” Such a boundary condition is the direct result of the ambiguity of the operating environment.

The intent of a “cyberweapon” is not hardcoded in the machine instructions but derived from an overwhelming set of probabilities, which throws proportional response into a complete tizzy. It is exactly why the hacking of a mere film studio like Sony Pictures or an accounting software maker like M.E.Doc gets labeled as an act of war.

Cyberattacks rely on creating a potent, indiscernible mix of effects and perceptions. Their effects also cascade massively, capable of causing extreme but invisible damage to national security and sovereignty.

In Bytes, Bombs, and Spies, Herbert Lin, a senior research scholar for cyber policy and security at Stanford’s Center for International Security and Cooperation, observes that “offensive cyber operations act most directly on intangibles—information, knowledge, and confidence.” It is indeed a fallacy of the Tallinn Manual to measure the damage of cyberattacks with some kind of kinetic equivalent.

Version 2.0 of the Tallinn Manual also expanded the “black letter rules”—the thou-shalt-nots of cyber—to a total of 154. It ridiculously bars states from hacking first responders like the adversary’s computer emergency response teams (CERTs). In fact, for a military keen on maintaining good operational security, hacking CERTs would be a mandatory prerequisite.

Schmitt’s rationale for shifting his stance is that “operating instructions are a type of data known as computer, or program code.” Aitel believes that Schmitt is reframing the “entire conception of cyber capabilities as ‘communications of code,’ hence, indirect actions.” And indirect actions have unexpected consequences, a far cry from our understanding of the munitions and weapons that formed the basis of the Tallinn Manual.

Moreover, the Tallinn Manual’s staunch insistence that the Westphalian precept of territoriality is somehow applicable to cyberspace is bizarre and regressive. The recently adopted “Defend Forward” strategy of Cyber Command emerged from decades of painful realization that the complex, tangled sinews of cyberspace make it a globally contested territory. It further espouses the concept of constant adversarial contact where a firm calculation of redlines becomes nearly impossible.

Richard Danzig, former secretary of the US Navy, is of the following opinion: “Successful strategies must proceed from the premise that cyberspace is continuously contested territory in which we can control memory and operating capabilities some of the time but cannot be assured of complete control all of the time or even of any control at any particular time.”

Thomas Dullien, a malware reverse engineer formerly employed by Google, stated at the 2018 NATO CyCon conference that “ownership,” “possession,” and “control” of data and assets in cyberspace do not necessarily overlap.

So the pre-emptive hacking of supposedly “foreign” networks is not so much an act of dominance and aggression as one of order and control. It is why Aitel argues that offense-defense is a misleading dichotomy, better replaced with “control and non-control.”

Another common theme in Schmitt’s paper and the Tallinn Manual is that “customary law” applies to new means of warfare before they are fielded.

Customary law is the facet of IHL that evolves from the innate customs and practices of nations. Brown’s counterpoint is that customary law for cyber operations can only emerge when nations expressly define their limits and capabilities for the domain.

Since cyber operations offer a perfect cover of plausible deniability, governments have shied away from owning up even to attacks—like WannaCry, the Fancy Bear intrusions, Shamoon, and Stuxnet—that have been attributed with high confidence.

Moreover, the declaratory aspects of cyber capabilities suffer from the paradoxical “cyber commitment problem.” Precise declarations may give away cyber implants that are generally target-specific, while also disrupting the balance of power on which strategic deterrence rests (in cases where cyberattacks may act as covert counterforce).

Schmitt summarizes the paper with the takeaway that cyber operations or capabilities are indeed methods of warfare. While the terms “weapons,” “means,” and “methods” remain undefined in IHL, he largely implies that cyberattacks are a component of a military’s overarching but legally vague TTPs—tactics, techniques, and procedures.

As innovative research by the Misinfosec Working Group of the Credibility Coalition suggests, cyber-enabled information operations also violate the foundational triad of cybersecurity: confidentiality, integrity, and availability. In other words, every cyber operation could be deemed an information operation even after its full denouement. It is not at all clear whether an information operation constitutes a means or a method of warfare.

A paper by Heather Harrison Dinniss of the Swedish Defence University (coauthored with Schmitt) cites case law like Nicaragua v. United States to deduce that an information operation falls noticeably below the threshold of an armed attack necessary to invoke IHL. That may bestow another garb of impunity upon rogue cyberattacks.

There is also the subliminal geostrategic narrative that often gets overlooked. Aitel draws upon Clausewitz to state that “policy is just cyber war by other means.”

Contrary to how it appears, cyber policy is a convenient indulgence for nation-states to keep cyber offense fully potentiated. Like the occasional eruption of a geyser, it distracts us from the volcano that simmers deep underground.

While repeatedly snubbing Russia at the UN Group of Governmental Experts by watering down or vetoing proposals during the initial deliberations, the United States built a massive global dragnet on the sidelines. It was only after the breach of some redlines by Russia in cyberspace that the United States was seriously drawn into the discussion, but it was too late by then.

The Tallinn Manual’s schizoid assertions only add to the unstable nature of the domain. The damning indictment is that its rules cannot be applied to any past cyber incident with full confidence, and that the thresholds of war may need to be recalibrated beyond the tested parameters of an armed attack to make sense of persistent, ongoing, and imperceptible cyber conflict.

The March 2018 Command Vision of Cyber Command readily admits that cyber operations conventionally fall below the thresholds of armed attack or use of force. This suggests a remarkable devolution of its thresholds of war, options for proportional response, and rules of engagement.

Cyber Command has started uploading malware samples of foreign adversaries to public repositories, comparable to an age-old hacker tactic called doxing. It even went to the extent of warning Russian trolls by sending them direct messages. None of these actions would find a mention in US military doctrine.

Cyber Command’s recent actions in Russia and Iran mark the explicit signaling of its cyber capabilities, much in line with a playbook that goes all the way back to the philosophy posited by Gen. James Cartwright in the early 2000s. An early proponent of Cyber Command, Cartwright once said, “We’ve got to talk about our offensive capabilities and train for them; to make them credible so that people know that there’s a penalty.”

Gen. Michael Hayden, who led the strategic reposturing of the National Security Agency after 9/11, has admitted that the cyber domain remains “hideously over-classified.” Customary law for cyberspace may only emerge when militaries buck the trend of over-classification. Until then, much like targeted assassinations and other forms of irregular warfare, norm-violation may remain the only practical form of norm-setting.

The road to responsible state behavior in cyberspace is paved with bad intentions. Cyber power projection may keep on following the Thucydidean paradigm, “The strong do what they can, and the weak suffer what they must.”

Pukhraj Singh is a cyber intelligence analyst and has worked with the Indian government and security response teams of global companies. He blogs at www.pukhraj.me. The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
