Friday, March 31, 2017

Science policy and the Cold War


The marriage of science, technology, and national security took a major step forward during and following World War II. The secret Manhattan Project, marshaling the energies and time of thousands of scientists and engineers, showed that military needs could effectively mobilize coordinated research into fundamental and applied topics, leading to the development of the plutonium bomb and eventually the hydrogen bomb. (Richard Rhodes' memorable The Making of the Atomic Bomb provides a fascinating telling of that history.) But also noteworthy are the coordinated efforts made in advanced computing, cryptography, radar, operations research, and aviation. (Interesting books on several of these areas include Stephen Budiansky's Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union and Blackett's War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare, and Dyson's Turing's Cathedral: The Origins of the Digital Universe.) Scientists served the war effort, and their work made a material difference in the outcome. More significantly, the US developed effective systems for organizing and directing the process of scientific research -- decision-making processes to determine which avenues should be pursued, bureaucracies for allocating funds for research and development, and motivational structures that kept the participants involved with a high level of commitment. Tom Hughes' very interesting Rescuing Prometheus: Four Monumental Projects that Changed Our World tells part of this story.

But what about the peace?

During the Cold War there was a new global antagonism between the US and the USSR. The terms of this competition included both conventional and nuclear weapons, and it was clear on all sides that the stakes were high. So what happened to the institutions of scientific and technical research and development from the 1950s forward?

Stuart Leslie addressed these questions in a valuable 1993 book, The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford. Defense funding maintained and deepened the quantity of university-based research that was aimed at what were deemed important military priorities.
The armed forces supplemented existing university contracts with massive appropriations for applied and classified research, and established entire new laboratories under university management: MIT's Lincoln Laboratory (air defense); Berkeley's Lawrence Livermore Laboratory (nuclear weapons); and Stanford's Applied Electronics Laboratory (electronic communications and countermeasures). (8)
In many disciplines, the military set the paradigm for postwar American science. Just as the technologies of empire (specifically submarine telegraphy and steam power) once defined the relevant research programs for Victorian scientists and engineers, so the military-driven technologies of the Cold War defined the critical problems for the postwar generation of American scientists and engineers.... These new challenges defined what scientists and engineers studied, what they designed and built, where they went to work, and what they did when they got there. (9)
And Leslie offers an institutional prediction about knowledge production in this context:
Just as Veblen could have predicted, as American science became increasingly bound up in a web of military institutions, so did its character, scope, and methods take on new, and often disturbing, forms. (9)
The evidence for this prediction is offered in the specialized chapters that follow. Leslie traces in detail the development of major research laboratories at both universities -- involving tens of millions of dollars in funding and thousands of graduate students and scientists -- carefully focused on the development of sensitive technologies in radio, computing, materials, aviation, and weaponry.
No one denied that MIT had profited enormously in those first decades after the war from its military connections and from the unprecedented funding sources they provided. With those resources the Institute put together an impressive number of highly regarded engineering programs, successful both financially and intellectually. There was at the same time, however, a growing awareness, even among those who had benefited most, that the price of that success might be higher than anyone had imagined -- a pattern for engineering education set, organizationally and conceptually, by the requirements of the national security state. (43)
In the closing chapter of the book, Leslie gives some attention to the counter-pressures to the military's dominance in research universities that can arise within a democracy, when the anti-Vietnam War movement raised opposition to military research on university campuses and eventually led to the end of classified research at many of them. He highlights the protests that occurred at MIT and Stanford during the 1960s; but equally radical protests against classified and military research happened in Madison, Urbana, and Berkeley.

These issues resonate strongly with Science, Technology and Society studies (STS). Leslie is indeed a historian of science and technology, but his approach does not fully share the social constructivism prevalent in that field today. His emphasis is on the implications of funding sources for the direction that research in basic science and technology took in the 1950s and 1960s in leading universities like MIT and Stanford. And his basic caution is that the military and security priorities associated with this structure all but guaranteed that the course of research was distorted in directions that would not have been chosen in a more traditional university research environment.

The book raises a number of important questions about the organization of knowledge and the appropriate role of universities in scientific research. In one sense the Vietnam War is a red herring, because the opposition it generated in the United States was very specific to that particular war. But most people would probably understand and support the idea that universities played a crucial role in World War II by discovering and developing new military technologies, and that this was an enormously important and proper role for scientists in universities to play. Defeating fascism and dictatorship was an existential need for the whole country. So the idea that university research is sometimes used and directed towards the interests of national security is not inherently improper.

A different kind of worry arises on the topic of what kind of system is best for guiding research in science and technology towards improving the human condition. In grand terms, one might consider whether some large fraction of the billions of dollars spent in military research between 1950 and 1980 might have been better spent on finding ways of addressing human needs directly -- and therefore reducing the likely future causes of war. Is it possible that we would today be in a situation in which famine, disease, global warming, and ethnic and racial conflict were substantially eliminated if we had dedicated as much attention to these issues as we did to advanced nuclear weapons and stealth aircraft?

Leslie addresses STS directly in "Reestablishing a Conversation in STS: Who’s Talking? Who’s Listening? Who Cares?" (link). Donald MacKenzie's Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance tells part of the same story with a greater emphasis on the social construction of knowledge throughout the process.

(I recall a demonstration at the University of Illinois against a super-computing lab in 1968 or 1969. The demonstrators were appeased when it was explained that the computer was being used for weather research. It was later widely rumored on the campus that the weather research in question was in fact directed towards considering whether the weather of Vietnam could be manipulated in a militarily useful way.)

Thursday, March 30, 2017

Social science or social studies?



A genuinely difficult question is this: does the idea of a rigorous "social science" really make sense, given what we know of the nature of the social world, the nature of human agency, and the nature of historical change?

There are of course large areas of social inquiry that involve genuine observation and measurement: demography, population health statistics, survey research, economic activity, social statistics of various kinds. Part of science is careful observation of a domain and analysis of the statistical patterns that emerge; so it is reasonable to say that demography, public health, and opinion research admit of rigorous empirical treatment.

Second, it is possible to single out complex historical events or processes for detailed empirical and historical study: the outbreak of WWI, the occurrence and spread of the Spanish influenza epidemic, the rise of authoritarian populism in Europe. Complex historical events like these admit of careful evidence-based investigation, designed to allow us to better understand the sequence of events and circumstances that made them up. And we can attempt to make sense of the connections that exist within such sequences, whether causal, cultural, or semiotic.

Third, it is possible to identify causal connections among social events or processes: effective transportation networks facilitate the diffusion of ideas and germs; price rises in a commodity result in decreases in its consumption; the density of an individual's social networks influences the likelihood of career success; etc. It is perfectly legitimate for social researchers to attempt to identify these causal connections and mechanisms, and further, to understand how these kinds of causal influence work in the social world. The kinds of inquiry mentioned here certainly admit of explanatory hypotheses; so explanation, a key goal of science, is indeed feasible in the social realm.

Fourth, there are "system" effects in the social world: transportation, communication, labor markets, electoral systems -- all these networks of interaction and influence can be seen to have effects on the patterns of social activity that emerge in the societies in which they exist. These kinds of effects can be studied from various points of view -- empirically, formally, through simulation, and so on. Such investigations can once again serve as a basis for explanation of puzzling social phenomena.
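To make concrete what studying a "system effect" through simulation might look like, here is a minimal sketch -- my own illustration, not drawn from any particular study -- of how the structure of a communication network shapes the diffusion of an idea through a population. The network, the transmission probability, and all the numbers are invented for illustration only.

```python
import random

random.seed(42)

# A toy "communication network": each node lists the nodes it can reach directly.
network = {
    0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4],
}

def simulate_diffusion(network, seed_node=0, p_transmit=0.5, steps=10):
    """Simple contagion: each step, every adopter passes the idea to each
    neighbor with probability p_transmit. Returns the adopter count per step."""
    adopters = {seed_node}
    history = [len(adopters)]
    for _ in range(steps):
        new = set()
        for node in adopters:
            for neighbor in network[node]:
                if neighbor not in adopters and random.random() < p_transmit:
                    new.add(neighbor)
        adopters |= new
        history.append(len(adopters))
    return history

print(simulate_diffusion(network))
```

Running the same dynamics on a sparser or denser network changes how far and how fast the idea spreads; the explanatory factor is a property of the network as a system, not of any individual node.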

This list of legitimate objects of empirical study in the social world, resulting in legitimate and evidence-based knowledge and explanation, can certainly be extended. And if being scientific means no more than conducting analysis of empirical phenomena based on observation, evidence, and causal inquiry, then we can reasonably say that it is possible to take a scientific attitude towards empirical problems like these.

But the hard question is whether there is more to social science than a fairly miscellaneous set of results that have emerged through study of questions like these. In particular, the natural sciences have aspired to formulating fundamental general theories that serve to systematize wide ranges of natural phenomena -- the theory of universal gravitation or the theory of evolution through natural selection, for example. The goal is to reduce the heterogeneity and diversity of natural phenomena to a few general theoretical hypotheses about the underlying reality of the natural world.

Are general theories like these possible in the social realm?

Some theorists have wanted to answer this question in the affirmative. Karl Marx, for example, believed that his theory of the capitalist mode of production provided a basis for systematizing and explaining a very wide range of social data about the modern social world. It was this supposed capacity for systematizing the data of the modern world that led Marx to claim that he was providing a "science of society".

But it is profoundly dubious that this theory, or any similarly general theory, can play the role of a fundamental theory of the social world, in the way that perhaps electromagnetic theory or quantum mechanics play a fundamental role in understanding the natural world.

The question may seem unimportant. But in fact, to call an area of inquiry "science" brings some associations that may not be at all justified in the case of study of the social world. In particular, science is often thought to be comprehensive, predictive, and verifiable. But knowledge of the social world falls short in each of these ways. There is no such thing as a comprehensive or foundational social theory, much as theorists like Marx have thought otherwise. Predictions in the social realm are highly uncertain and contingent. And it is rare to have a broad range of social data that serves to "confirm" or "verify" a general social theory.

Here is one possible answer to the question posed above, consistent with the points made here. Yes, social science is possible. But what social science consists in is an irreducible and pluralistic family of research methods, observations, explanatory hypotheses, and mid-level theories that permit only limited prediction and that cannot in principle serve to unify the social realm under a single set of theoretical hypotheses. There are no grand unifying theories in the social realm, only an open-ended set of theories of the middle range that can be used to probe and explain the social facts we can uncover through social and historical research.

In fact, to the extent that the ideas of contingency, heterogeneity, plasticity, and conjuncturality play the important role in the social world that I believe they do, it is difficult to avoid the conclusion that there are very narrow limits to the degree to which we can aspire to systematic or theoretical explanation in the social realm. And this in turn suggests that we might better describe social inquiry as a set of discrete and diverse social studies rather than unified "social science". We might think of the domain of social knowledge better in analogy to the contents of a large and diverse tool box than in analogy to an orrery that predicts the "motions" of social structures over time.


Tuesday, March 21, 2017

The soft side of critical realism


Critical realism has appealed to a range of sociologists and political scientists, in part because of the legitimacy it renders for the study of social structures and organizations. However, many of the things sociologists study are not "things" at all, but rather subjective features of social experience -- mental frameworks, identities, ideologies, value systems, knowledge frameworks. Is it possible to be a critical realist about "subjective" social experience and formations of consciousness? Here I want to argue in favor of a CR treatment of subjective experience and thought.

First, let's recall what it means to be realist about something. It means to take a cognitive stance towards the formation that treats it as being independent from the concepts we use to categorize it. It is to postulate that there are facts about the formation that are independent from our perceptions of it or the ways we conceptualize it. It is to attribute to the formation a degree of solidity in the world, a set of characteristics that can be empirically investigated and that have causal powers in the world. It is to negate the slogan, "all that is solid melts into air" with regard to these kinds of formations. "Real" does not mean "tangible" or "material"; it means independent, persistent, and causal.  

So to be realist about values, cognitive frameworks, practices, or paradigms is to assert that these assemblages of mental attitudes and features have social instantiation, that they persist over time, and that they have causal powers within the social realm. By this definition, mental frameworks are perfectly real. They have visible social foundations -- concrete institutions and practices through which they are transmitted and reproduced. And they have clear causal powers within the social realm.

A few examples will help make this clear.

Consider first the assemblage of beliefs, attitudes, and behavioral repertoires that constitute the race regime in a particular time and place. Children and adults from different racial groups in a region have internalized a set of ideas and behaviors about each other that are inflected by race and gender. These beliefs, norms, and attitudes can be investigated through a variety of means, including surveys and ethnographic observation. Through their behaviors and interactions with each other they gain practice in their mastery of the regime, and they influence outcomes and future behaviors. They transmit and reproduce features of the race regime to peers and children. There is a self-reinforcing discipline to such an assemblage of attitudes and behaviors which shapes the behaviors and expectations of others, both internally and coercively. This formation has causal effects on the local society in which it exists, and it is independent from the ideas we have about it. It is, by these criteria, a real part of local society. (It is also a variable and heterogeneous reality, across time and space.) We can trace the sociological foundations of the formation within the population, the institutional arrangements through which minds and behaviors are shaped. And we can identify many social effects of specific features of regimes like this. (Here is an earlier post on the race regime of Jim Crow; link, link.)

Here is a second useful example -- a knowledge and practice system like Six Sigma. This is a bundle of ideas about business management. It involves some fairly specific doctrines and technical practices. There are training institutions through which individuals become expert at Six Sigma. And there is a distributed group of expert practitioners across a number of companies, consulting firms, and universities who possess highly similar sets of knowledge, judgment, and perception.  This is a knowledge and practice community, with specific and identifiable causal consequences. 

These are two concrete examples. Many others could be offered -- working-class solidarity, bourgeois modes of dress and manners, the social attitudes and behaviors of French businessmen, the norms of Islamic charity, the Protestant Ethic, Midwestern modesty.

So, indeed, it is entirely legitimate to be a critical realist about mental frameworks. Moreover, the realist who abjures study of such frameworks as social realities is doomed to offer explanations with mysterious gaps. He or she will find large historical anomalies, where available structural causes fail to account for important historical outcomes.

Consider Marx and Engels' words in the Communist Manifesto:
All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.
This is an interesting riff on social reality, capturing both change and persistence, appearance and reality. A similar point of view is expressed in Marx's theory of the fetishism of commodities: beliefs exist, they have social origins, and it is possible to demystify them on occasion by uncovering the distortions they convey of real underlying social relations. 

There is one more perplexing twist here for realists. Both structures and features of consciousness are real in their social manifestations. However, one goal of critical philosophy is to show how the mental structures of a given class or gender are in fact false consciousness. It is a fact that British citizens in 1871 had certain ideas about the workings of contemporary capitalism. But it is an important function of critical theory to demonstrate that those beliefs were wrong, and to account more accurately for the underlying social relations they attempt to describe. And it is important to discover the mechanisms through which those false beliefs came into existence.

So critical realism must both identify real structures of thought in society and demystify these thought systems when they systematically falsify the underlying social reality. Decoding the social realities of patriarchy, racism, and religious bigotry is itself a key task for a critical social science.

Dave Elder-Vass is one of the few critical realists who have devoted attention to the reality of a subjective social thing, a system of norms. In The Causal Power of Social Structures: Emergence, Structure and Agency he tries to show how the idea of a "norm circle" helps explicate the objectivity, persistence, and reality of a socially embodied norm system. Here is an earlier post on E-V's work (link).




Friday, March 17, 2017

Mechanisms according to analytical sociology


One of the distinguishing characteristics of analytical sociology is its insistence on the idea of causal mechanisms as the core component of explanation. Like post-positivists in other traditions, AS theorists specifically reject the covering law model of explanation and argue for a "realist" understanding of causal relations and powers: a causal relationship between x and y exists only insofar as there exist one or more causal mechanisms generating y given the occurrence of x. Peter Hedström puts the point this way in Dissecting the Social:
A social mechanism, as defined here, is a constellation of entities and activities that are linked to one another in such a way that they regularly bring about a particular type of outcome. (kl 181)
A basic characteristic of all explanations is that they provide plausible causal accounts for why events happen, why something changes over time, or why states or events co-vary in time or space. (kl 207)
The core idea behind the mechanism approach is that we explain not by evoking universal laws, or by identifying statistically relevant factors, but by specifying mechanisms that show how phenomena are brought about. (kl 334)
A social mechanism, as here defined, describes a constellation of entities and activities that are organized such that they regularly bring about a particular type of outcome. (kl 342)
So far so good. But AS adds another requirement about causal mechanisms in the social realm that is less convincing: that the only real or credible mechanisms are those involving the actions of individual actors. In other words, causal action in the social world takes place solely at the micro level. This assumption is substantial, non-trivial, and seemingly dogmatic. 
Sociological theories typically seek to explain social outcomes such as inequalities, typical behaviours of individuals in different social settings, and social norms. In such theories individuals are the core entities and their actions are the core activities that bring about the social-level phenomena that one seeks to explain. (kl 356)
Although the explanatory focus of sociological theory is on social entities, an important thrust of the analytical approach is that actors and actions are the core entities and activities of the mechanisms explaining such phenomena. (kl 383)
The theory should also explain action in intentional terms. This means that we should explain an action by reference to the future state it was intended to bring about. Intentional explanations are important for sociological theory because, unlike causalist explanations of the behaviourist or statistical kind, they make the act 'understandable' in the Weberian sense of the term. (kl 476)
Here is a table in which Hedström classifies different kinds of social mechanisms; significantly, all are at the level of actors and their mental states.


The problem with this "action-level" requirement on the nature of social mechanisms is that it rules out as a matter of methodology that there could be social causal processes that involve factors at higher social levels -- organizations, norms, or institutions, for example. (For that matter, it also rules out the possibility that some individual actions might take place in a way that is inaccessible to conscious knowledge -- for example, impulse, emotion, or habit.) And yet it is common in sociology to offer social explanations invoking causal properties of things at precisely these "meso" levels of the social world -- fairly ordinary statements of social causation in which a primary causal factor is an organization, an institutional arrangement, or a normative system.

It is true, of course, that such entities depend on the actions and minds of individuals. This is the thrust of ontological individualism (link, link): the social world ultimately depends on individuals in relation to each other and in relation to the modes of social formation through which their knowledge and action principles have been developed. But explanatory or methodological individualism does not follow from the truth of ontological individualism, any more than biological reductionism follows from the truth of physicalism. Instead, it is legitimate to attribute stable causal properties to meso-level social entities and to invoke those entities in legitimate social-causal explanations. Earlier arguments for meso-level causal mechanisms can be found here, here, and here.

This point about "micro-level dogmatism" leads me to believe that analytical sociology is unnecessarily rigid when it comes to causal processes in the social realm. Moreover, this rigidity leads it to be unreceptive to many approaches to sociology that are perfectly legitimate and insightful. It is as if someone proposed to offer a science of cooking but would only countenance statements at the level of organic chemistry. Such an approach would preclude the possibility of distinguishing different cuisines on the basis of the palette of spices and flavors that they use. By analogy, the many approaches to sociological research that proceed on the basis of an analysis of the workings of mid-level social entities and influences are excluded by the strictures of analytical sociology. Not all social research needs to take the form of the discovery of microfoundations, and reductionism is not the only scientifically legitimate strategy for explanation.

(The photo above of a moment from the Deepwater Horizon disaster is relevant to this topic, because useful accident analysis needs to invoke the features of organization that led to a disaster as well as the individual actions that produced the particular chain of events leading to the disaster. Here is an earlier post that explores this feature of safety engineering; link.)

Thursday, March 9, 2017

Moral limits on war


World War II raised great issues of morality in the conduct of war. These were practical issues during the war, because that conflict approached "total war" -- the use of all means against all targets to defeat the enemy. So the moral questions could not be evaded: are there compelling reasons of moral principle that make certain tactics in war completely unacceptable, no matter how efficacious they might be said to be?

As Michael Walzer made clear in Just and Unjust Wars: A Moral Argument with Historical Illustrations in 1977, we can approach two rather different kinds of questions when we inquire about the morality of war. First, we can ask whether a given decision to go to war is morally justified given its reasons and purposes. This brings us into the domain of the theory of just war -- self-defense against aggression, and perhaps prevention of large-scale crimes against humanity. And second, we can ask whether the strategies and tactics chosen are morally permissible. This forces us to think about the moral distinction between combatant and non-combatant, the culpable and the innocent, and possibly the idea of military necessity. The principle of double effect comes into play here -- the idea that unintended but predictable civilian casualties may be permissible if the intended target is a legitimate military target, and the unintended harms are not disproportionate to the value of the intended target.

We should also notice that there are two ways of approaching both issues -- one on the basis of existing international law and treaty, and the other on the basis of moral theory. The first treats the morality of war as primarily a matter of convention, while the latter treats it as an expression of valued moral principles. There is some correspondence between the two approaches, since laws and treaties seek to embody shared norms about warfare. And there are moral reasons why states should keep their agreements, irrespective of the content. But the rationales of the two approaches are different.

Finally, there are two different kinds of reasons why a people or a government might care about the morality of its conduct of war. The first is prudential: "if we use this instrument, then others may use it against us in the future". The convention outlawing the use of poison gas may fall in this category. So it may be argued that the conventions limiting the conduct of war are beneficial to all sides, even when there is a short-term advantage in violating the convention. The second is a matter of moral principle: "if we use this instrument, we will be violating fundamental normative ideals that are crucial to us as individuals and as a people". This is a Kantian version of the morality of war: there are at least some issues that cannot be resolved based solely on consequences, but rather must be resolved on the basis of underlying moral principles and prohibitions. So executing hostages or prisoners of war is always and absolutely wrong, no matter what military advantages might ensue. Preserving the lives and well-being of innocents seems to be an unconditional moral duty in war. But likewise, torture is always wrong, not only because it is imprudent, but because it is fundamentally incompatible with treating people in our power in a way that reflects their fundamental human dignity.

The means of war-making chosen by the German military during World War II were egregious -- for example, shooting hostages, murdering prisoners, performing medical experiments on prisoners, and unrestrained strategic bombing of London. But hard issues arose on the side of the alliance that fought against German aggression as well. Particularly hard cases during World War II were the campaigns of "strategic bombing" against cities in Germany and Japan, including the firebombing of Dresden and Tokyo. These decisions were taken in the context of fairly clear data showing that strategic bombing did not substantially impair the enemy's ability to wage war industrially, and in the context of the fact that its primary victims were innocent civilians. Did the Allies make a serious moral mistake by making use of this tactic? Did innocent children and non-combatant adults pay the price in these most horrible ways of the decision to incinerate cities? Did civilian leaders fail to exercise sufficient control to prevent their generals from inflicting pet theories like the presumed efficacy of strategic bombing on whole urban populations?

And how about the decision to use atomic bombs against Hiroshima and Nagasaki? Were these decisions morally justified by the rationale that was offered -- that they compelled surrender by Japan and thereby avoided tens of thousands of combatant deaths ensuing from invasion? Were two bombs necessary, or was the attack on Nagasaki literally a case of overkill? Did the United States make a fateful moral error in deciding to use atomic bombs to attack cities and the thousands of non-combatants who lived there?

These kinds of questions may seem quaint and obsolete in a time of drone strikes, cyber warfare, and renewed nuclear posturing. But they are not. As citizens we have responsibility for the acts of war undertaken by our governments. We need to be clear and insistent in maintaining that the use of the instruments of war requires powerful moral justification, and that there are morally profound reasons for demanding that war tactics respect the rights and lives of the innocent. War, we must never forget, is horrible.

Geoffrey Robertson's Crimes Against Humanity: The Struggle for Global Justice poses these questions with particular pointedness. Also of interest is John Mearsheimer's Conventional Deterrence.

Saturday, March 4, 2017

The atomic bomb


Richard Rhodes' history of the development of the atomic bomb, The Making of the Atomic Bomb, is now thirty years old. The book is crucial reading for anyone who has the slightest anxiety about the tightly linked, high-stakes world we inhabit in the twenty-first century. The narrative Rhodes provides of the scientific and technical history of the era is outstanding. But there are other elements of the story that deserve close thought and reflection as well.

One is the question of the role of scientists in policy and strategy decision making before and during World War II. Physicists like Bohr, Szilard, Teller, and Oppenheimer played crucial roles in the science, but they also played important roles in the formulation of wartime policy and strategy. Were they qualified for these roles? Does being a brilliant scientist carry over to being an astute and wise advisor when it comes to the large policy issues of the war and the international policies to follow? And if not the scientists, then who? At least a certain number of senior policy advisors to the Roosevelt administration, international politics experts all, seem to have badly dropped the ball during the war -- in ignoring the genocidal attacks on Europe's Jewish population, for example. Can we expect wisdom and foresight from scientists when it comes to politics, or are they as blinkered as the rest of us on average?

A second and related issue is the moral question: do scientists have any moral responsibilities when it comes to the use, intended or otherwise, of the technologies they spawn? A particularly eye-opening part of the story Rhodes tells is the research undertaken within the Manhattan Project about the possible use of radioactive material as a poisonous weapon of war against civilians on a large scale. The topic seems to have arisen as a result of speculation about how the Germans might use radioactive materials against civilians in Great Britain and the United States. Samuel Goudsmit, scientific director of the US military team responsible for investigating German progress towards an atomic bomb following the Normandy invasion, refers to this concern in his account of the mission in Alsos (7). According to Rhodes, the idea was first raised within the Manhattan Project by Fermi in 1943, and was realistically considered by Groves and Oppenheimer. This seems like a clear case: no scientist should engage in research like this, research aimed at discovering the means of the mass poisoning of half a million civilians.

Leo Szilard played an exceptional role in the history of the quest for developing atomic weapons (link). He more than other physicists foresaw the implications of the possibility of nuclear fission as a foundation for a radically new kind of weapon, and his fear of German mastery of this technology made him a persistent and ultimately successful advocate for a major research and industrial effort towards creating the bomb. His recruitment of Albert Einstein as the author of a letter to President Roosevelt underlining the seriousness of the threat and the importance of establishing a full scale effort made a substantial difference in the outcome. Szilard was entirely engaged in efforts to influence policy, based on his understanding of the physics of nuclear fission; he was convinced very early that a fission bomb was possible, and he was deeply concerned that German physicists would succeed in time to permit the Nazis to use such a weapon against Great Britain and the United States. Szilard was a physicist who also offered advice and influence on the statesmen who conducted war policy in Great Britain and the United States.

Niels Bohr is an excellent example to consider with respect to both large questions (link). He was, of course, one of the most brilliant and innovative physicists of his generation, recognized with the Nobel Prize in 1922. He was also a man of remarkable moral courage, remaining in Copenhagen long after prudence would have dictated emigration to Britain or the United States. He was more articulate and outspoken than most scientists of the time about the moral responsibilities the physicists undertook through their research on atomic energy and the bomb. He was farsighted about the implications for the future of warfare created by a successful implementation of an atomic or thermonuclear bomb. Finally, he is exceptional, on a par with Einstein, in his advocacy of a specific approach to international relations in the atomic age, and was able to meet with both Roosevelt and Churchill to make his case. His basic view was that the knowledge of fission could not be suppressed, and that the Allies would be best served in the long run by sharing their atomic knowledge with the USSR and working towards an enforceable non-proliferation agreement. The meeting with Churchill went particularly badly, with Churchill eventually maintaining that Bohr should be detained as a security risk.

Here is the memorandum that Bohr wrote to President Roosevelt in 1944 (link). Bohr makes the case for public sharing of the scientific and technical knowledge each nation has gained about nuclear weapons, and the establishment of a regime among nations that precludes the development and proliferation of nuclear weapons. Here are a few key paragraphs from his memorandum to Roosevelt:
Indeed, it would appear that only when the question is raised among the united nations as to what concessions the various powers are prepared to make as their contribution to an adequate control arrangement, will it be possible for any one of the partners to assure himself of the sincerity of the intentions of the others.

Of course, the responsible statesmen alone can have insight as to the actual political possibilities. It would, however, seem most fortunate that the expectations for a future harmonious international co-operation, which have found unanimous expressions from all sides within the united nations, so remarkably correspond to the unique opportunities which, unknown to the public, have been created by the advancement of science.
These thoughts are not put forward in the spirit of high-minded idealism; they are intended to serve as sober, fact-based guides to a more secure future. So it is worth considering: do the facts about international behavior justify the recommendations? In fact the world has settled on a hybrid set of approaches: the doctrine of deterrence based on mutual assured destruction, and a set of international institutions to which nations are signatories, intended to prevent or slow the proliferation of nuclear weapons. Another brilliant thinker and 2005 Nobel Prize winner, Thomas Schelling, provided the analysis that expresses the current theory of deterrence in his 1966 book Arms and Influence (link).

So who is closer to the truth when it comes to projecting the behavior of partially rational states and their governing apparatuses? My view is that the author of Micromotives and Macrobehavior has the more astute understanding of the logic of disaggregated collective action and the ways that a set of independent strategies aggregate to the level of organizational or state-level behavior. Schelling's analysis of the logic of deterrence and the quasi-stability that it creates is compelling -- perhaps more so than Bohr's vision, which depends at critical points on voluntary compliance.
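To illustrate the kind of micro-to-macro aggregation Schelling analyzed, here is a minimal one-dimensional segregation sketch in the spirit of Micromotives and Macrobehavior. It is a toy of my own construction, not Schelling's exact specification; the neighborhood radius, the 40 percent threshold, and the population sizes are arbitrary illustrative choices.

```python
import random

random.seed(1)

def neighbors(grid, i, radius=2):
    """Occupied cells within `radius` positions of i on a circular street."""
    n = len(grid)
    return [grid[(i + d) % n] for d in range(-radius, radius + 1)
            if d != 0 and grid[(i + d) % n] is not None]

def satisfied(grid, i, agent, threshold=0.4):
    """Does `agent` meet its threshold of same-type neighbors at cell i?"""
    nbrs = neighbors(grid, i)
    if not nbrs:
        return True
    return sum(1 for x in nbrs if x == agent) / len(nbrs) >= threshold

def step(grid, threshold=0.4):
    """Move one randomly chosen unhappy agent to an empty cell it would accept."""
    movers = [i for i, a in enumerate(grid)
              if a is not None and not satisfied(grid, i, a, threshold)]
    empties = [i for i, a in enumerate(grid) if a is None]
    if movers and empties:
        i = random.choice(movers)
        good = [j for j in empties if satisfied(grid, j, grid[i], threshold)]
        j = random.choice(good or empties)
        grid[j], grid[i] = grid[i], None

def segregation(grid):
    """Average share of same-type neighbors across all agents."""
    scores = [sum(1 for x in neighbors(grid, i) if x == a) / len(neighbors(grid, i))
              for i, a in enumerate(grid) if a is not None and neighbors(grid, i)]
    return sum(scores) / len(scores)

# 45 'A's, 45 'B's, and 10 empty cells, randomly mixed.
grid = ['A'] * 45 + ['B'] * 45 + [None] * 10
random.shuffle(grid)
print("before:", round(segregation(grid), 2))
for _ in range(2000):
    step(grid)
print("after: ", round(segregation(grid), 2))
```

Even though no agent demands more than a 40 percent same-type neighborhood, the aggregate outcome is typically far more segregated than any individual preferred -- the signature Schelling point that modest individual strategies can aggregate into strong macro-level patterns.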


This judgment receives support from international relations scholars of the following generation as well. For example, in an extensive article published in 1981 (link) Kenneth Waltz argues that nuclear weapons have helped to make international peace more stable, and his argument turns entirely on the rational-choice basis of the theory of deterrence:
What will a world populated by a larger number of nuclear states look like? I have drawn a picture of such a world that accords with experience throughout the nuclear age. Those who dread a world with more nuclear states do little more than assert that more is worse and claim without substantiation that new nuclear states will be less responsible and less capable of self-control than the old ones have been. They express fears that many felt when they imagined how a nuclear China would behave. Such fears have proved unfounded as nuclear weapons have slowly spread. I have found many reasons for believing that with more nuclear states the world will have a promising future. I have reached this unusual conclusion for six main reasons.

First, international politics is a self-help system, and in such systems the principal parties do most to determine their own fate, the fate of other parties, and the fate of the system. This will continue to be so, with the United States and the Soviet Union filling their customary roles. For the United States and the Soviet Union to achieve nuclear maturity and to show this by behaving sensibly is more important than preventing the spread of nuclear weapons.

Second, given the massive numbers of American and Russian warheads, and given the impossibility of one side destroying enough of the other side's missiles to make a retaliatory strike bearable, the balance of terror is indestructible. What can lesser states do to disrupt the nuclear equilibrium if even the mighty efforts of the United States and the Soviet Union cannot shake it? The international equilibrium will endure. (concluding section)
The logic of the rationality of cooperation, and the constant possibility of defection, seems to undermine the possibility of the kind of quasi-voluntary nuclear regime that Bohr hoped for -- one based on unenforceable agreements about the development and use of nuclear weapons. The incentives in favor of defection are too great. So this seems to be a case where a great physicist has a less than compelling theory of how an international system of nations might work. And if the theory is unreliable, then so are the policy recommendations that follow from it.
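The defection logic can be made explicit with a stylized one-shot game between two rival states deciding whether to comply with an unenforceable arms-control agreement. The payoff numbers below are purely illustrative -- my own stylization of the argument, not Schelling's or Waltz's formalization.

```python
# Payoffs (illustrative numbers only): (row player, column player).
# "C" = comply with the arms-control agreement, "D" = covertly develop weapons.
payoffs = {
    ("C", "C"): (3, 3),   # mutual restraint
    ("C", "D"): (0, 4),   # unilateral restraint against an armed rival
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),   # arms race
}

def best_response(opponent_move):
    """Row player's best reply to a fixed move by the rival."""
    return max(["C", "D"], key=lambda m: payoffs[(m, opponent_move)][0])

# Defection is the best reply whether the rival complies or defects,
# so the cooperative outcome unravels toward (D, D) without enforcement.
print(best_response("C"), best_response("D"))  # -> D D
```

Because defection is the best reply to either move by the rival, an agreement backed only by voluntary compliance is unstable -- which is the core of the worry about Bohr's proposal.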