
Wednesday, July 25, 2018

Cyber threats


David Sanger's very interesting recent book, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, is a timely read this month, following the indictments of twelve Russian intelligence officers for hacking the DNC in 2016. Sanger is a national security writer for the New York Times and has covered cyber security issues for a number of years. He, William Broad, and John Markoff were among the first journalists to piece together the story behind the Stuxnet attack on Iran's nuclear fuel program (the secret program called Olympic Games), and the book also offers some intriguing hints about the possibility of "left of launch" intrusions by US agencies into the North Korean missile program. This is a book that everyone should read. It greatly broadens the scope of what most of us think about under the category of "hacking". We tend to think of invasions of privacy and identity theft when we think of nefarious uses of the internet; but Sanger makes it clear that the stakes are much greater. Current cyber-warfare tools are capable of bringing down whole national infrastructures, leading to massive civilian hardship.

There are several important takeaways from Sanger's book. One is the pervasiveness and power of the offensive cyber tools available to nation-state actors for penetrating and potentially disrupting or destroying the infrastructures of their opponents. Russia, China, North Korea, Iran, and the United States are all shown to possess tools of intrusion, data extraction, and system destruction that are extremely difficult for targeted countries and systems to defend against. The Sony attack (North Korea), the Office of Personnel Management breach (China), the attack on Ukraine's electric grid (Russia), the attack on Saudi Arabia's massive oil company Aramco (Iran), and the attack on the US electoral system (Russia) all proceeded with massive effect and without evident response from their victims or the United States. At present the balance of capability appears to favor offense over defense.

A second important theme is the extreme level of secrecy that the US intelligence establishment has imposed on the capabilities it possesses for conducting cyber conflict. Sanger makes it clear that he believes a greater level of public understanding of the capabilities and risks created by cyber weapons like Stuxnet would be beneficial in the United States and other countries, by permitting a more serious public debate about the means and ends, risks and rewards, of the use of cyber weapons. He likens this to the Obama administration's eventual willingness to make a public case for its use of unmanned drone strikes against its enemies.

Third, Sanger makes it clear that the classic logic of deterrence that was successful in maintaining nuclear peace is less potent when it comes to cyber warfare and escalation. State-level adversaries have selected strategies of cyber attack precisely because of the relatively low cost of developing this technology, the relative anonymity of an attack once it occurs, and the difficulties faced by victims in selecting appropriate and effective counter-strikes that would deter the attacker in the future.
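Sanger's point about weakened deterrence can be restated in simple expected-value terms. Here is a minimal sketch in Python; the function and every number in it are hypothetical illustrations of the logic, not figures from the book:

```python
# Toy expected-value model of why deterrence works poorly in cyberspace.
# All numbers are hypothetical illustrations, not figures from the book.

def expected_net_gain(benefit, dev_cost, p_attribution, retaliation_cost):
    """Attacker's expected payoff: the gain from a successful attack,
    minus the cost of developing the weapon, minus the cost of
    retaliation discounted by the chance of being identified."""
    return benefit - dev_cost - p_attribution * retaliation_cost

# Nuclear-style attack: attribution is near-certain, retaliation massive.
print(expected_net_gain(benefit=100, dev_cost=50,
                        p_attribution=0.99, retaliation_cost=10_000))
# -> -9850.0 : deterrence holds

# Cyber attack: cheap tools, plausible deniability, uncertain response.
print(expected_net_gain(benefit=100, dev_cost=5,
                        p_attribution=0.2, retaliation_cost=200))
# -> 55.0 : the attack pays
```

On these stylized numbers the would-be nuclear attacker faces ruin, while the cyber attacker comes out ahead; that asymmetry is exactly the one Sanger describes.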

The National Security Agency gets a lot of attention in the book. Its Office of Tailored Access Operations receives extensive discussion, based on revelations from the Snowden materials and other sources. Sanger makes it clear that the NSA had developed a substantial toolkit for intercepting communications and penetrating computer systems to capture data files of security interest. But according to Sanger it has also developed strong cyber tools for offensive use against potential adversaries. Part of the evidence for this judgment comes from the Snowden revelations (which are also discussed extensively). Part comes from what Sanger and others were able to discover about the workings of Stuxnet in targeting Iranian nuclear centrifuges over a period of many months. And part comes from suggestive reporting about the odd fact that North Korea's medium-range missile tests were so spectacularly unsuccessful across a series of launches.

The book leads to worrisome conclusions and questions. US infrastructure and counter-cyber programs have proven highly vulnerable to attacks that have already taken place in our country. The extraction by Chinese military intelligence of millions of confidential personal records of US citizens from the Office of Personnel Management went on for months and was uncovered only after the damage was done. The effectiveness of Russian attacks on Ukraine's electric power grid suggests that similar attacks would be possible in other advanced countries, including the United States. All of these incidents suggest a level of vulnerability and potential for devastating attack that the public is not prepared for.

Thursday, March 9, 2017

Moral limits on war


World War II raised great issues of morality in the conduct of war. These were practical issues during the war, because that conflict approached "total war" -- the use of all means against all targets to defeat the enemy. So the moral questions could not be evaded: are there compelling reasons of moral principle that make certain tactics in war completely unacceptable, no matter how efficacious they might be said to be?

As Michael Walzer made clear in Just and Unjust Wars: A Moral Argument with Historical Illustrations in 1977, we can ask two rather different kinds of questions when we inquire about the morality of war. First, we can ask whether a given decision to go to war is morally justified given its reasons and purposes. This brings us into the domain of the theory of just war -- self-defense against aggression, and perhaps prevention of large-scale crimes against humanity. And second, we can ask whether the strategies and tactics chosen are morally permissible. This forces us to think about the moral distinction between combatant and non-combatant, the culpable and the innocent, and possibly the idea of military necessity. The principle of double effect comes into play here -- the idea that unintended but predictable civilian casualties may be permissible if the intended target is a legitimate military target, and the unintended harms are not disproportionate to the value of the intended target.

We should also notice that there are two ways of approaching both issues -- one on the basis of existing international law and treaty, and the other on the basis of moral theory. The first treats the morality of war as primarily a matter of convention, while the latter treats it as an expression of valued moral principles. There is some correspondence between the two approaches, since laws and treaties seek to embody shared norms about warfare. And there are moral reasons why states should keep their agreements, irrespective of the content. But the rationales of the two approaches are different.

Finally, there are two different kinds of reasons why a people or a government might care about the morality of its conduct of war. The first is prudential: "if we use this instrument, then others may use it against us in the future". The convention outlawing the use of poison gas may fall in this category. So it may be argued that the conventions limiting the conduct of war are beneficial to all sides, even when there is a short-term advantage in violating the convention. The second is a matter of moral principle: "if we use this instrument, we will be violating fundamental normative ideals that are crucial to us as individuals and as a people". This is a Kantian version of the morality of war: there are at least some issues that cannot be resolved based solely on consequences, but rather must be resolved on the basis of underlying moral principles and prohibitions. So executing hostages or prisoners of war is always and absolutely wrong, no matter what military advantages might ensue. Preserving the lives and well-being of innocents seems to be an unconditional moral duty in war. But likewise, torture is always wrong, not only because it is imprudent, but because it is fundamentally incompatible with treating people in our power in a way that reflects their fundamental human dignity.

The means of war-making chosen by the German military during World War II were egregious -- for example, shooting hostages, murdering prisoners, performing medical experiments on prisoners, and unrestrained strategic bombing of London. But hard issues arose on the side of the alliance that fought against German aggression as well. Particularly hard cases during World War II were the campaigns of "strategic bombing" against cities in Germany and Japan, including the firebombing of Dresden and Tokyo. These decisions were taken in the context of fairly clear data showing that strategic bombing did not substantially impair the enemy's ability to wage war industrially, and despite the fact that its primary victims were innocent civilians. Did the Allies make a serious moral mistake by making use of this tactic? Did innocent children and non-combatant adults pay, in these most horrible ways, the price of the decision to incinerate cities? Did civilian leaders fail to exercise sufficient control to prevent their generals from inflicting pet theories, like the presumed efficacy of strategic bombing, on whole urban populations?

And how about the decision to use atomic bombs against Hiroshima and Nagasaki? Were these decisions morally justified by the rationale that was offered -- that they compelled surrender by Japan and thereby avoided tens of thousands of combatant deaths ensuing from invasion? Were two bombs necessary, or was the attack on Nagasaki literally a case of overkill? Did the United States make a fateful moral error in deciding to use atomic bombs to attack cities and the thousands of non-combatants who lived there?

These kinds of questions may seem quaint and obsolete in a time of drone strikes, cyber warfare, and renewed nuclear posturing. But they are not. As citizens we have responsibility for the acts of war undertaken by our governments. We need to be clear and insistent in maintaining that the use of the instruments of war requires powerful moral justification, and that there are morally profound reasons for demanding that war tactics respect the rights and lives of the innocent. War, we must never forget, is horrible.

Geoffrey Robertson's Crimes Against Humanity: The Struggle for Global Justice poses these questions with particular pointedness. Also of interest is John Mearsheimer's Conventional Deterrence.

Saturday, March 4, 2017

The atomic bomb


Richard Rhodes' history of the development of the atomic bomb, The Making of the Atomic Bomb, is now thirty years old. The book is crucial reading for anyone who has the slightest anxiety about the tightly linked, high-stakes world we inhabit in the twenty-first century. The narrative Rhodes provides of the scientific and technical history of the era is outstanding. But there are other elements of the story that deserve close thought and reflection as well.

One is the question of the role of scientists in policy and strategy decision making before and during World War II. Physicists like Bohr, Szilard, Teller, and Oppenheimer played crucial roles in the science, but they also played important roles in the formulation of wartime policy and strategy. Were they qualified for these roles? Does being a brilliant scientist carry over to being an astute and wise advisor when it comes to the large policy issues of the war and international policies to follow? And if not the scientists, then who? At least a certain number of senior policy advisors to the Roosevelt administration, international politics experts all, seem to have badly dropped the ball during the war -- in ignoring the genocidal attacks on Europe's Jewish population, for example. Can we expect wisdom and foresight from scientists when it comes to politics, or are they as blinkered as the rest of us?

A second and related issue is the moral question: do scientists have any moral responsibilities when it comes to the use, intended or otherwise, of the technologies they spawn? A particularly eye-opening part of the story Rhodes tells is the research undertaken within the Manhattan Project about the possible use of radioactive material as a poisonous weapon of war against civilians on a large scale. The topic seems to have arisen as a result of speculation about how the Germans might use radioactive materials against civilians in Great Britain and the United States. Samuel Goudsmit, scientific director of the US military team responsible for investigating German progress towards an atomic bomb following the Normandy invasion, refers to this concern in his account of the mission in Alsos (7). According to Rhodes, the idea was first raised within the Manhattan Project by Fermi in 1943, and was realistically considered by Groves and Oppenheimer. This seems like a clear case: no scientist should engage in research like this, research aimed at discovering the means of the mass poisoning of half a million civilians.

Leo Szilard played an exceptional role in the history of the quest for developing atomic weapons (link). He more than other physicists foresaw the implications of the possibility of nuclear fission as a foundation for a radically new kind of weapon, and his fear of German mastery of this technology made him a persistent and ultimately successful advocate for a major research and industrial effort towards creating the bomb. His recruitment of Albert Einstein as the author of a letter to President Roosevelt underlining the seriousness of the threat and the importance of establishing a full scale effort made a substantial difference in the outcome. Szilard was entirely engaged in efforts to influence policy, based on his understanding of the physics of nuclear fission; he was convinced very early that a fission bomb was possible, and he was deeply concerned that German physicists would succeed in time to permit the Nazis to use such a weapon against Great Britain and the United States. Szilard was a physicist who also offered advice and influence on the statesmen who conducted war policy in Great Britain and the United States.

Niels Bohr is an excellent example to consider with respect to both large questions (link). He was, of course, one of the most brilliant and innovative physicists of his generation, recognized with the Nobel Prize in 1922. He was also a man of remarkable moral courage, remaining in Copenhagen long after prudence would have dictated emigration to Britain or the United States. He was more articulate and outspoken than most scientists of the time about the moral responsibilities the physicists undertook through their research on atomic energy and the bomb. He was farsighted about the implications for the future of warfare created by a successful implementation of an atomic or thermonuclear bomb. Finally, he is exceptional, on a par with Einstein, in his advocacy of a specific approach to international relations in the atomic age, and was able to meet with both Roosevelt and Churchill to make his case. His basic view was that the knowledge of fission could not be suppressed, and that the Allies would be best served in the long run by sharing their atomic knowledge with the USSR and working towards an enforceable non-proliferation agreement. The meeting with Churchill went particularly badly, with Churchill eventually maintaining that Bohr should be detained as a security risk.

Here is the memorandum that Bohr wrote to President Roosevelt in 1944 (link). Bohr makes the case for public sharing of the scientific and technical knowledge each nation has gained about nuclear weapons, and the establishment of a regime among nations that precludes the development and proliferation of nuclear weapons. Here are a few key paragraphs from his memorandum to Roosevelt:
Indeed, it would appear that only when the question is raised among the united nations as to what concessions the various powers are prepared to make as their contribution to an adequate control arrangement, will it be possible for any one of the partners to assure himself of the sincerity of the intentions of the others.

Of course, the responsible statesmen alone can have insight as to the actual political possibilities. It would, however, seem most fortunate that the expectations for a future harmonious international co-operation, which have found unanimous expressions from all sides within the united nations, so remarkably correspond to the unique opportunities which, unknown to the public, have been created by the advancement of science.
These thoughts are not put forward in the spirit of high-minded idealism; they are intended to serve as sober, fact-based guides to a more secure future. So it is worth considering: do the facts about international behavior justify the recommendations?

In fact the world has settled on a hybrid set of approaches: the doctrine of deterrence based on mutual assured destruction, and a set of international institutions, to which nations are signatories, intended to prevent or slow the proliferation of nuclear weapons. Another brilliant thinker and 2005 Nobel Prize winner, Thomas Schelling, provided the analysis that expresses the current theory of deterrence in his 1966 book Arms and Influence (link).

So who is closer to the truth when it comes to projecting the behavior of partially rational states and their governing apparatuses? My view is that the author of Micromotives and Macrobehavior has the more astute understanding of the logic of disaggregated collective action and the ways that a set of independent strategies aggregates to organizational or state-level behavior. Schelling's analysis of the logic of deterrence and the quasi-stability it creates is compelling -- perhaps more so than Bohr's vision, which depends at critical points on voluntary compliance.


This judgment receives support from international relations scholars of the following generation as well. For example, in an extensive article published in 1981 (link), Kenneth Waltz argues that nuclear weapons have helped to make international peace more stable, and his argument turns entirely on the rational-choice basis of the theory of deterrence:
What will a world populated by a larger number of nuclear states look like? I have drawn a picture of such a world that accords with experience throughout the nuclear age. Those who dread a world with more nuclear states do little more than assert that more is worse and claim without substantiation that new nuclear states will be less responsible and less capable of self-control than the old ones have been. They express fears that many felt when they imagined how a nuclear China would behave. Such fears have proved unfounded as nuclear weapons have slowly spread. I have found many reasons for believing that with more nuclear states the world will have a promising future. I have reached this unusual conclusion for six main reasons.

First, international politics is a self-help system, and in such systems the principal parties do most to determine their own fate, the fate of other parties, and the fate of the system. This will continue to be so, with the United States and the Soviet Union filling their customary roles. For the United States and the Soviet Union to achieve nuclear maturity and to show this by behaving sensibly is more important than preventing the spread of nuclear weapons.

Second, given the massive numbers of American and Russian warheads, and given the impossibility of one side destroying enough of the other side’s missiles to make a retaliatory strike bearable, the balance of terror is indestructible. What can lesser states do to disrupt the nuclear equilibrium if even the mighty efforts of the United States and the Soviet Union cannot shake it? The international equilibrium will endure. (concluding section)
The logic of rational cooperation, with its constant possibility of defection, seems to undermine the kind of quasi-voluntary nuclear regime that Bohr hoped for -- one based on unenforceable agreements about the development and use of nuclear weapons. The incentives in favor of defection are too great.

So this seems to be a case where a great physicist had a less than compelling theory of how an international system of nations might work. And if the theory is unreliable, then so are the policy recommendations that follow from it.
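The defection problem is the standard logic of the one-shot prisoner's dilemma, and it can be made concrete in a few lines of Python. The payoffs below are purely illustrative assumptions, not numbers from Schelling or Waltz:

```python
# One-shot prisoner's dilemma for two nuclear rivals. Payoffs are
# illustrative only: (row player's payoff, column player's payoff).
payoffs = {
    ("comply", "comply"): (3, 3),   # both honor the agreement
    ("comply", "defect"): (0, 5),   # the defector gains a decisive edge
    ("defect", "comply"): (5, 0),
    ("defect", "defect"): (1, 1),   # costly, dangerous arms race
}

def best_response(opponent_move):
    """Row player's best reply to a fixed move by the opponent."""
    return max(["comply", "defect"],
               key=lambda move: payoffs[(move, opponent_move)][0])

print(best_response("comply"))   # -> defect
print(best_response("defect"))   # -> defect
```

Defection is the best reply to either move, so mutual defection is the only equilibrium even though both sides prefer mutual compliance; an agreement with no enforcement mechanism tends to unravel in exactly this way.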

Tuesday, January 17, 2017

Signals intelligence and the management of military competition


In the past few years many observers have been alarmed by the high-tech realities of cyber-security, cyber-spying, and cyber-warfare. Current attention focuses on the apparent impunity with which government-sponsored intruders have managed to penetrate and exploit the computer systems of government and corporate organizations -- often extracting vast quantities of sensitive or classified information over extended periods of time. The Sony intrusion and the Office of Personnel Management intrusion represent clear examples of each (link, link). Gordon Corera's Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage provides a very interesting description of the contemporary realities of cyber-spying by governments and private intruders.

It is very interesting to realize that the cat-and-mouse game of using cryptography, electronic signals collection, and intelligence analysis to read an adversary's intentions and communications has a long history, and resulted in problems strikingly similar to those we currently face. A very good recent book that conveys a detailed narrative of the development of signals intelligence and cryptography since World War II is Stephen Budiansky's Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union. The book offers a surprisingly detailed account of the formation and management of the National Security Agency during the Truman presidency and the sophisticated efforts expended toward penetrating military and diplomatic codes since the Enigma successes of Bletchley Park.

There are several particularly interesting lessons to be learned from Code Warriors. One is a recognition of the remarkable resourcefulness and technical sophistication that was incorporated into the signals intelligence establishment in the 1940s and 1950s. Many of us think primarily of the achievements of Bletchley Park and the breaking of code systems like Enigma during World War II. But signals intelligence went far beyond cryptography. For example, a great deal of valuable intelligence resulted from "traffic analysis" -- specific information about the time and location of various encrypted messages. Even without being able to read the messages themselves, analysts could draw inferences about military activity. This is an early version of the metadata analysis now applied to email and phone calls.
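The inferential idea behind traffic analysis is easy to illustrate. Here is a toy sketch in Python with an entirely fabricated intercept log; the analyst never reads a message, yet one transmitter's one-day burst of traffic stands out:

```python
from collections import Counter

# Toy traffic analysis: the analyst sees only metadata (transmitter and
# date), never the encrypted contents. The log is entirely fabricated.
intercepts = [
    ("station_A", "1950-06-20"), ("station_A", "1950-06-21"),
    ("station_B", "1950-06-21"),
    ("station_A", "1950-06-24"), ("station_A", "1950-06-24"),
    ("station_A", "1950-06-24"), ("station_A", "1950-06-24"),
]

volume_by_station = Counter(station for station, _ in intercepts)
volume_by_day = Counter(intercepts)

print(volume_by_station)                 # station_A dominates the net
print(max(volume_by_day, key=volume_by_day.get))
# -> ('station_A', '1950-06-24'): a one-day burst of the sort analysts
#    read as a sign of impending military activity.
```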

Another surprise was the ability of intelligence establishment communications experts in the 1950s to use "side-channel" attacks to gain access to adversaries' communications channels (multi-channel radio teletype machines, for example). By recording the electromagnetic emissions, power fluctuations, and acoustic patterns of code machines, typewriters, and teletype machines it was possible to reconstruct the plain text that was passing through these devices.
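The underlying signal-processing idea is what would now be called a template attack: record a reference emission for each symbol, then classify an observed trace by its nearest template. Here is a schematic sketch in Python, with invented three-number "traces" standing in for real electromagnetic or acoustic recordings:

```python
import math

# Schematic "template attack": each key on a (simulated) teletype has a
# characteristic emission trace; an eavesdropper classifies an observed
# trace by its nearest reference template. All traces are invented.
templates = {
    "A": [0.1, 0.9, 0.3],
    "B": [0.8, 0.2, 0.5],
    "C": [0.4, 0.4, 0.9],
}

def classify(trace):
    """Return the symbol whose template is nearest in Euclidean distance."""
    def distance(template):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(trace, template)))
    return min(templates, key=lambda symbol: distance(templates[symbol]))

# A noisy observation of the operator striking "B" still matches B.
print(classify([0.75, 0.25, 0.45]))  # -> B
```

Real attacks of this kind worked from electromagnetic, power, and acoustic recordings rather than neat three-number vectors, but the matching logic is the same.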

Most interesting for readers of Understanding Society, however, are the many problems of organization, management, and leadership that an effective intelligence service required. Several problems were particularly intractable. Inter-service rivalries were an enormous obstacle to effective collection, analysis, and use of signals intelligence. Motivating and retaining civilian experts within a large military research organization was a second. And defending against the misappropriation of documents and secrets by trusted insiders was a third.

The problem of inter-agency rivalries and competition was debilitating and intractable. Army and Navy intelligence bureaus were enormously reluctant to subordinate their efforts to a single prioritized central agency. And this failure to cooperate and share information and processes led to substantial intelligence shortfalls.
The 1946 agreement between the Army and Navy to “coordinate” their separate signals intelligence operations had merely sidestepped glaring deficiencies in the entire arrangement, which was quickly proving itself unequal to the new technical and intelligence challenges they faced in attacking the Russian problem. (lc 1933)
But AFSA’s seventy-six-hundred-person staff and $35 million budget remained a small share of the total enterprise, and both the Army and Air Force cryptologic agencies continued to grab important projects for themselves. ASAPAC and USAFSS both duplicated AFSA’s work on Soviet and Chinese codes throughout the Korean War, and simply ignored attempts by AFSA to take charge of field processing within the theater. The Air Force had meanwhile established its headquarters of USAFSS at Brooks Air Force Base in Texas, a not too subtle attempt to escape from the Washington orbit altogether. (lc 2933)

AFSA was powerless to prevent even the most obvious duplication of effort: for over a year the Army and the Air Force both insisted on intercepting Russian and Chinese air communications, and it was not until March 1952, after months of negotiations, that ASA finally agreed to leave the job to the Air Force. The Navy meanwhile flatly refused to put its worldwide network of direction-finding stations—which provided the single most important source of information on the location and movement of Soviet surface ships and submarines—under central control. (lc 2949)
Also challenging was the problem of incorporating smart, innovative civilian experts into what had become rigid, hierarchical military organizations. Keeping these civilians -- often PhDs in mathematics -- motivated and productive within the strictures of a post-war military bureaucracy was exceptionally difficult. During WWII the atmosphere was conducive to innovative work:
At GC&CS and Arlington Hall in particular, formal lines of authority had never counted for much during the war; getting the job done was what mattered, and in large part because no one planned to make a career of the work, no one was very career-minded about office politics or promotion or pay or protecting their bureaucratic turf. Cecil Phillips remembered wartime Arlington Hall as a true “meritocracy” where a sergeant, who in a considerable number of cases might have a degree from MIT or Harvard or some other top school, and a lieutenant might work side by side as equals on the same problem and no one thought much about it. (lc 1417)
But after the war the bureaucratic military routines became a crushing burden:
At ASA, peace brought a flood of pettifogging orders, policy directives, and procedural instructions, accompanied by a succession of martinet junior officers who rotated in and out and often knew nothing about cryptanalysis but were sticklers for organization, military protocol, and the chain of command. Lengthy interoffice memoranda circulated dissecting the merits of developing a personnel handbook, or analyzing whether a proposed change in policy that would allow civilian employees of Arlington Hall to be admitted to the post movie theater was consistent with Paragraph 10, AR 210-389 of the Army Regulations. “Low pay and too many military bosses” would be a recurring complaint from ASA’s civilian workforce over the next few years, along with a sense that no matter how much experience they had or how qualified they were, the top positions in each division would always go to a less qualified Army officer. (lc 1430)
The problem of coordinating, directing, and managing these high-talent scientists proved to be an ever-challenging task for NSA as well:
Among the administrative nightmares of the explosively growing, disjointed, and highly technical top-secret organization that Canine inherited was a notable lack of skilled managers. That was a failing common to creative and technical enterprises, which always tended to attract people more at home dealing with abstract ideas than with their fellow human beings, but it was especially acute in the very abstract world of cryptanalysis. “I had a terrible time finding people that could manage,” Canine related. “We were long on technical brains at NSA and we were very short on management brains.” 50 The splintering of the work into hundreds of separate problems, each isolated technically and for security reasons from one another, exacerbated the difficulties of trying to assert managerial control on an organization made up of thousands of individualistic thinkers who marched to no identifiable drum known to management science. (lc 3582)
And of course the problem of insider spying turned out to be essentially insurmountable, from the defection of NSA employees William Martin and Bernon Mitchell in 1960, to the spy ring run by John Walker from the 1960s to 1985, to the secret document collection and publication by Edward Snowden in 2013. Kim Philby comes into the story as well, having managed to position himself in Washington in a job that allowed him to collect and pass on the intelligence community's most intimate secrets (including the current status of its ability to decrypt Soviet codes and the progress being made at identifying Soviet agents within the US).

The agency's commitment to the polygraph as a way of evaluating employees' loyalty is, according to Budiansky, another source of organizational failure; the polygraph had no scientific validity, and the false confidence it offered permitted the agency's security infrastructure to forgo other, more reliable ways of combating insider spying.
As subsequent events would make all too clear, the touching faith that a piece of Edwardian pseudoscientific electrical gadgetry could safeguard the nation’s most important secrets would prove farcically mistaken, for almost every one of the real spies to betray NSA in the ensuing years passed a polygraph interview with flying colors, while obvious signs that in retrospect should have set off alarm bells about their behavior were blithely ignored, largely due to such misplaced confidence in hocus-pocus. (kl 3355)
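Budiansky's point can be sharpened with a simple base-rate calculation. Suppose, hypothetically and generously, that the polygraph caught 80 percent of real spies while wrongly flagging only 5 percent of loyal employees. Because spies are rare, nearly everyone flagged would still be innocent, while a spy who beat the test once was positively cleared:

```python
# Base-rate sketch of polygraph screening. All numbers are hypothetical
# and deliberately generous to the polygraph.
population = 10_000    # employees screened
spies = 10             # actual spies among them
sensitivity = 0.80     # chance a real spy fails the test
false_positive = 0.05  # chance a loyal employee fails the test

true_alarms = spies * sensitivity                     # 8 spies flagged
false_alarms = (population - spies) * false_positive  # ~500 loyal flagged

# Probability that a flagged employee is actually a spy:
print(true_alarms / (true_alarms + false_alarms))     # ~0.016
```

On these assumed numbers, fewer than 2 percent of failed polygraphs would belong to actual spies, while a passing result effectively laundered the real ones.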
Budiansky makes it clear that the extreme secrecy embedded within NSA was one of the organizational and political weaknesses of the entity. Its activities were kept secret from the political authorities of the country, and the agency was sometimes used to conceal intelligence considered to be harmful to those authorities. The case of the misuse of intelligence during the Tonkin Gulf crisis is a particularly clear example, where intelligence data were misused to support the administration's need to find an incident that could serve as a cause for war.
A classified, searingly honest accounting by NSA historian Robert J. Hanyok in 2001 found that in bolstering the administration’s version of events, NSA summary reports made use of only 15 of the relevant intercepts in its files, suppressing 122 others that all flatly contradicted the now “official” version of the August 4 events. Translations were altered; in one case two unrelated messages were combined to make them appear to have been from the same message; one of the NSA summary reports that did include a mention of signals relating to a North Vietnamese salvage operation obfuscated the timing to hide the fact that one of the recovered boats was being taken under tow at the very instant it was supposedly attacking the Maddox and Turner Joy. The original Vietnamese-language version of the August 4 attack message that had triggered the Critic alert meanwhile mysteriously vanished from NSA’s files. (kl 5096)
Budiansky is forthright in identifying the weaknesses and excesses of NSA and the intelligence services. But he also makes it clear how essential these capabilities are, from allowing the US to assess Soviet intentions during the Cuban Missile Crisis to vectoring friendly aircraft toward hostile fighters on the basis of penetration of the air-to-air radio network in Korea and Vietnam. So the hard question for Budiansky, and for us as citizens, is how to structure and constrain the collection of intelligence so that it serves the goal of defending the country against attack without deviating into administrative chaos and politicized misdirection. There are many other expert organizations that have very similar dysfunctions, from advanced civilian scientific laboratories to modern corporate IT organizations. (Here is a discussion of Paul Rabinow's ethnography of the Cetus Corporation, the biotech research firm that invented PCR; link.)

Tuesday, June 26, 2012

Organizational failure as a meso cause


A recurring issue in the past few months here has been the validity of meso-level causal explanations of social phenomena. It is self-evident that we attribute causal powers to meso entities in ordinary interactions with the social world. We assent to statements like these; they make sense to us.
  • Reorganization of the city's service departments led to less corruption in Chicago.
  • Poor oversight and a culture of hyper-masculinity encourage sexual harassment in the workplace.
  • Divided command and control of military forces leads to ineffective response to attack.
  • Mining culture is conducive to social resistance.
We can gain some clarity on the role played by appeals to meso-level factors by considering a concrete example in detail. Military failure is a particularly interesting example to consider. Warfare proceeds through complex interlocking social organizations; it depends on information collection and interpretation; it requires the coordination of sometimes independent decision-makers; it involves antagonists deliberately striving to interfere with each other's strategies; and it often leads to striking and almost incomprehensible failures. Eliot Cohen and John Gooch's Military Misfortunes: The Anatomy of Failure in War is a highly interesting study of military failure that makes substantial use of organizational sociology and the sociology of failure more broadly, so it provides a valuable case to examine.

Here are a few framing ideas that guide Cohen and Gooch in their analysis and selection of cases.
True military "misfortunes" -- as we define them -- can never be justly laid at the door of any one commander. They are failures of the organization, not of the individual. The other thing the failures we shall examine have in common is their apparently puzzling nature. Although something has clearly gone wrong, it is hard to see what; rather, it seems that fortune -- evenly balanced between both sides at the outset -- has turned against one side and favored the other. These are the occasions when it seems that the outcome of the battle depended at least as much on one side's mishandling of the situation as on the other's skill in exploiting a position of superiority ... The causes of organizational failure in the military world are not easy to discern. (3)
From the start, then, Cohen and Gooch are setting their focus on a meso-level factor -- features of organizations and their interrelations within a complex temporally extended period of activity.  They note that historians often start with the commander -- the familiar explanation of failures based on "operator error" -- as the culprit.  But as they argue in the cases they consider, this effort is as misguided in the case of military disaster as it is in other kinds of failure.  Much more fundamental are the organizational failures and system interactions that led to the misfortune. Take Field Marshal Douglas Haig, whose obstinate commitment to aggressive offense in the situation of trench warfare has been bitterly criticized as block-headed:
Not only was the high command confronted by a novel environment; it was also imprisoned in a system that made it well-nigh impossible to meet the challenges of trench warfare. The submissive obedience of Haig's subordinates, which Forester took for blinkered ignorance and whole-hearted support, was in reality the unavoidable consequence of the way in which the army high command functioned as an organization under its commander in chief. (13)
So why are organizations so central to the explanation of military failure?
Wherever people come together to carry out purposeful activity, organizations spring into being. The more complex and demanding the task, the more ordered and integrated the organization. ... Men form organizations, but they also work with systems. Whenever technological components are linked together in order to carry out a particular scientific or technological activity, the possibility exists that the normal sequence of events the system has been designed to carry out may go awry when failures in two or more components interact in an unexpected way. (21, 22)
And here is the crucial point: organizations and complexes of organizations (systems) have characteristics that sometimes produce features of coordinated action that are both unexpected and undesired.  A certain way of training officers may have been created in order to facilitate unity in combat; but it may also create a mentality of subordinacy that makes it difficult for officers to take appropriate independent action.  A certain system for managing the flow of materiel to the fighting forces may work fine in the leisurely circumstances of peace but quickly overload under the exigencies of war.  Weapon systems designed for central Europe may prove unreliable in North Africa.

Cohen and Gooch place organizational learning and information processing at the core of their theory of military failure.  They identify three kinds of failure: "failure to learn, failure to anticipate, and failure to adapt" (26). As a failure to learn, they cite the US Army's failure to learn from the French experience in Vietnam before designing its own strategies in the Vietnam War.  And they emphasize the unexpected interactions that can occur between different components of a complex organization like the military.  They recommend a "layered" analysis: "We look for the interactions between these organizations, as well as assess how well they performed their proper tasks and missions" (52).

The cases they consider correspond to this classification of failure.  They examine failure to learn in the case of American antisubmarine warfare in 1942; failure to anticipate in the case of the Israel Defense Forces' failure on the Golan Heights, 1973; and failure to adapt in the British disaster at Gallipoli, 1915.  Their example of aggregate failure, involving all three failures, is the defeat of the American Eighth Army in Korea, 1950.  And they reserve the grand prize, catastrophic failure, for the collapse of the French army and air force in 1940.

Each of these cases illustrates the authors' central thesis: that organizational failures are at the heart of many or most large military failures.  The example of the failure of the American antisubmarine campaign in 1942 off the east coast of the United States is particularly clear.  German submarines preyed at will on American shipping, placing a large question mark over the ability of the Americans to continue to resupply Allied forces in Europe.  The failure of American antisubmarine warfare was perplexing because the British navy had already developed proven and effective methods of ASW, and the American navy was aware of those methods.  Unhappily, Cohen and Gooch argue, the US Navy did not succeed in learning from this detailed wartime experience.

The factors that Cohen and Gooch cite include: insufficient patrol boats early in the campaign; insufficient training for pilots and patrol vessel crews; crucial failures of operational intelligence ("collection, organization, interpretation and dissemination of many different kinds of data"; 75); and, most crucially, organizational failures.
A prompt and accurate intelligence assessment would mean nothing if the analysts could not communicate that assessment directly to commanders on the scene, if those commanders did not have operational control over the various air and naval assets they required to protect shipping and sink U-boats, if they saw no reason to heed that intelligence, or if they had no firm notion of what to do about it. The working out of correct standard tactics ... could have no impact if destroyer skippers did not know or would not apply them. Moreover, as the U-boats changed their tactics and equipment ..., the antisubmarine forces needed to adopt compensating tactical changes and technological innovation. (76)
This contrasts with the British case:
The British system worked because it had developed an organizational structure that enabled the Royal Navy and RAF to make use of all of the intelligence at their disposal, to analyze it swiftly and accurately, and to disseminate it immediately to those who needed to have it. (77)
So why did the US Navy not adopt the British system of organization?  Here too the authors find organizational causes:
If the United States Navy had thought seriously about adapting its organization to the challenges of ASW in a fashion similar to that chosen by the British, it would have required major changes in how existing organizations operated, and in no case would this have been more true than that of intelligence. (89)
So in this case, Cohen and Gooch find that the failure of the US Navy to conduct effective antisubmarine warfare off the coast of the United States resides centrally in features of the organization of the Navy itself, and in collateral organizations like the intelligence service.

This is a good example of an effort to explain concrete and complex historical outcomes using the theoretical resources of organizational sociology.  And the language of causation is woven throughout the narrative.  The authors make a credible case for the view that the organizational features they identify caused, or contributed to causing, the large military failures they examine.