
Wednesday, June 30, 2021

Decision-making for big physics


Big science is largely dominant in many areas of science -- for example, high-energy physics, medical research, the human genome project, and pandemic research. Other areas of science still function well in a "small science" framework -- mathematics, evolutionary biology, or social psychology, for example, with a high degree of decentralized decision-making by individual researchers, universities, and laboratories. But in areas where scientific research requires vast investments of public funds over decades, we are forced to ask a hugely important question: Can governmental agencies act rationally and intelligently in planning for investments in "big science"?

Consider the outcome we would like to see: adoption of a well-funded and well-coordinated multi-investigator, multi-institutional, multi-year research effort well designed to achieve important scientific results. This is the ideal result. What is required in order to make it a reality? Here are the key activities of information-gathering and decision-making that are needed in order to arrive at a successful national agenda for an area of big-science research.

  1. selection of one or more research strategies that have the best likelihood of bringing about important scientific results
  2. a budgeting process and series of decisions that make these strategies feasible
  3. implementation of a multi-year plan (often over multiple research sites) implementing the chosen strategy
  4. oversight and management of the scientific research sites and expenditures to ensure that the strategy is faithfully carried out by talented scientists, researchers, and directors

In A New Social Ontology of Government: Consent, Coordination, and Authority I argue that governments, agencies, and large private organizations have a great deal of difficulty in carrying out large, extended plans. There I highlight principal-agent problems, conflicting priorities across sub-groups, faulty information sharing, and loose coupling within a large organization as some of the primary sources of dysfunction within a large organization (including a national government or large governmental agency). And it is apparent that all of these sources of dysfunction are present in the process of designing, funding, and managing a national science agenda.

Consider item 1 above: selection of a research strategy for scientific research. At any given time in the development of a field of research there is a body of theory and experimental findings that constitute what is currently known; there are experts (scientists) who have considered judgments about what the most important unanswered questions are, and what technologies or experimental investments would be most productive in illuminating those questions; and there are influential figures within government and industry who have preferences and beliefs about the direction that future research ought to take. 

Suppose government has created an agency -- call it the Office of High Energy Physics -- charged with arriving at a plan for future directions and funding for research in the field of high energy physics. (There is in fact an Office of High Energy Physics within the Department of Energy with approximately this responsibility; but here I am considering a hypothetical agency.) How should the director and senior staff of OHEP proceed?

They will recognize that they need rigorous and developed analysis from a group of senior physicists. The judgments of the best physicists in the national research and university community are surely the best (though fallible) source of guidance about the direction that future physics research should take. So OHEP constitutes a permanent committee of advisors who are tasked to assess the current state of the field and arrive at a consensus view of the most productive direction for future investments in high-energy physics research.

The Standing Scientific Committee is not a decision-making committee, however; rather, it prepares reports and advice for the senior staff and director of OHEP. And the individuals who make up the senior staff themselves have been selected for having a reasonable level of scientific expertise; further, they have their own "pet" projects and ideas about what topics are likely to be the most important. So the senior staff and the Standing Committee are in a complex relationship with each other. The Standing Scientific Committee collectively has greater intellectual authority in the scientific field; many are Nobel-quality physicists. But the senior staff have greater influence on the decisions that the Office makes about strategies and future plans. The staff are always there, whereas the Standing Committee does its work episodically. Moreover, the senior staff has an ability to influence the deliberations of the Standing Committee in a variety of ways, including setting the agenda of the Standing Committee, giving advice about the likelihood of funding of various possible strategies, and so forth. Finally, it is worth noting that a group of twenty senior physicists from a range of institutions throughout the country are likely to have interests of their own that will find their way into the deliberations, leading to disagreements about priorities. In short, the process of designing a plan for the next ten years of investments in high-energy physics research is not a purely rational and scientific exercise; it is also a process in which interests, influence, and bureaucratic manipulation play crucial roles.

Now turn to item 2 above, the budgeting issue. Decisions about funding of fundamental scientific research result from a political, legislative, and bureaucratic process. Congressional committees will be involved in the decision whether to allocate $5 billion, $10 billion, or $15 billion in high-energy physics research in the coming decade. And Congressional committees have their own sources of bias and dysfunction: legislators' political interests in their districts, relationships with powerful industries and lobbyists, and ideological beliefs that legislators bring to their work. These political and economic interests may influence the legislative funding process to favor one strategy over another -- irrespective of the scientific merits of the alternatives. (If one strategy brings more investment to the home state of a powerful Senator, this may tilt the funding decision accordingly.) Further, the system of Congressional staff work can be further analyzed in terms of the interests and priorities of the senior staffers doing the work -- leading once again to the likelihood that funding decisions will be based on considerations other than the scientific merits of various strategies for research. (Recall the debacle of Congressional influence on the Osprey VTOL aircraft development process.) 

Items 3 and 4 introduce a new set of possible dysfunctions into the process, through the likelihood of principal-agent problems across research sites. Directors of the National Laboratories (Fermilab or Lawrence Berkeley National Laboratory, for example) have their own interests and priorities, and they have a fairly wide range of discretion in decisions about implementation of national research priorities. So securing coordination of research efforts across laboratories and research sites introduces another source of uncertainty in the implementation and execution of a national strategy for physics research. This is an instance of "loose coupling", a factor that leads organizational theorists to expect a fair degree of divergence across the large network of sub-organizations that make up the national research system. Thomas Hughes considers these kinds of problems in Rescuing Prometheus: Four Monumental Projects That Changed the Modern World (link).

These observations do not imply that rational science policy is impossible; but they do underline the difficulties, arising within normal governmental and private institutions, that interfere with the idealized process of selecting and implementing an optimal strategy of scientific research. The colossal failure of the Superconducting Super Collider -- a multi-billion dollar project in high-energy physics that was abandoned in 1993 after many years of development and expenditure -- illustrates the challenges that national science planning encounters (link). One might argue that the focus at Fermilab on neutrino detection (DUNE) is another failure -- not because it was not implemented, but because it fails the test of making possible fundamental new discoveries in physics.

Several interdisciplinary fields take up questions like these, including Science and Technology Studies and Social Construction of Technology studies. Hackett, Amsterdamska, Lynch, and Wajcman's Handbook of Science and Technology Studies provides a good exposure to the field. Here is a prior post that attempts to locate big science within an STS framework. And here is a post on STS insights into science policy during the Cold War (link).


Thursday, June 25, 2020

STS and big science


A previous post noted the rapid transition in the twentieth century from small physics (Niels Bohr) to large physics (Ernest Lawrence). How should we understand the development of scientific knowledge in physics during this period of rapid growth and discovery?

One approach is through the familiar methods and narratives of the history of science -- what might be called "internal history of science". Researchers in the history of science generally approach the discipline from the point of view of discovery, intellectual debate, and the progress of scientific knowledge. David Cassidy's book Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb is sharply focused on the scientific and intellectual debates in which Heisenberg was immersed during the development of quantum theory. His book is fundamentally a narrative of intellectual discovery. Cassidy also takes on the moral-political issue of serving a genocidal state as a scientist; but this discussion has little to do with the history of science that he offers. Peter Galison is a talented and imaginative historian of science, and he asks penetrating questions about how to explain the advent of important new scientific ideas. His treatment of Einstein's theory of relativity in Einstein's Clocks, Poincaré's Maps: Empires of Time, for example, draws out how the material technology of clocks, and the intellectual influences flowing through the social networks in which Einstein was engaged, shaped Einstein's basic intuitions about space and time. But Galison too is primarily interested in telling a story about the origins of intellectual innovation.

It is of course valuable to have careful research studies of the development of science from the point of view of the intellectual context and concepts that influenced discovery. But fundamentally this approach leaves largely unexamined the difficult challenge: how do social, economic, and political institutions shape the direction of science?

The interdisciplinary field of science, technology, and society studies (STS) emerged in the 1970s as a sociological discipline that looked at laboratories, journals, and universities as social institutions, with their own interests, conflicts, and priorities. Hackett, Amsterdamska, Lynch, and Wajcman's Handbook of Science and Technology Studies provides a good exposure to the field. The editors explain that they consulted widely across researchers in the field, and instead of a unified and orderly "discipline" they found many cross-cutting connections and concerns.
What emerged instead is a multifaceted interest in the changing practices of knowledge production, concern with connections among science, technology, and various social institutions (the state, medicine, law, industry, and economics more generally), and urgent attention to issues of public participation, power, democracy, governance, and the evaluation of scientific knowledge, technology, and expertise. (kl 98)
The guiding idea of STS is that science is a socially situated human activity, embedded within sets of social and political relations and driven by a variety of actors with diverse interests and purposes. Rather than imagining that scientific knowledge is the pristine product of an impersonal and objective "scientific method" pursued by selfless individuals motivated solely by the search for truth, the STS field works on the premise that the institutions and actors within the modern scientific and technological system are unavoidably influenced by non-scientific interests. These include commercial interests (corporate-funded research in the pharmaceutical industry), political interests (funding agencies that embody the political agendas of the governing party), military interests (research on fields of knowledge and technological development that may have military applications), and even ideological interests (Lysenko's genetics and Soviet ideology). All of these different kinds of influence are evident in Hiltzik's account in Big Science: Ernest Lawrence and the Invention that Launched the Military-Industrial Complex of the evolution of the Berkeley Rad Lab, described in the earlier post.

In particular, individual scientists must find ways of fitting their talents, imagination, and insight into the institutions through which scientific research proceeds: universities, research laboratories, publication outlets, and sources of funding. And Hiltzik's book makes it very clear that a laboratory like the Radiation Lab that Lawrence created at the University of California-Berkeley must be crafted and designed in a way that allows it to secure the funds, equipment, and staff that it needs to carry forward the process of fundamental research, discovery, and experimentation that the researchers and the field of high-energy physics wished to conduct.

STS scholars sometimes sum up these complex social processes of institutions, organizations, interests, and powers leading to scientific and technological discovery as the "social construction of technology" (SCOT). And, indeed, both the course of physics and the development of the technologies associated with advanced physics research were socially constructed -- or guided, or influenced -- throughout this extended period of rapid advancement of knowledge. The investments that went into the Rad Lab did not go into other areas of potential research in physics or chemistry or biology; and of course this means that there were discoveries and advances that were delayed or denied as a result. (Here is a recent post on the topic of social influences on the development of technology; link.)

The question of how decisions are made about major investments in scientific research programs (including laboratories, training, and cultivation of new generations of scientists) is a critically important one. In an idealized way one would hope for a process in which major multi-billion dollar and multi-decade investments in specific research programs would be made in a rational way, incorporating the best judgments and advice of experts in the relevant fields of science. One of the institutional mechanisms through which national science policy is evaluated and set is the activity of the National Academies of Sciences, Engineering, and Medicine (NASEM) and similar expert bodies (link). In physics the committees of the American Physical Society are actively engaged in assessing the present and future needs of the fundamental science of the discipline (link). And the National Science Foundation and National Institutes of Health have well-defined protocols for peer assessment of research proposals. So we might say that science investment and policy in the US have a reasonable level of expert governance. (Here is an interesting status report on declining support for young scientists in the life sciences in the 1990s from an expert committee commissioned by NASEM (link). This study illustrates the efforts made by learned societies to assess the progress of research and to recommend policies that will be needed for future scientific progress.)

But what if the institutions through which these decisions are made are decidedly non-expert and bureaucratized -- Congress or the Department of Energy, for example, in the case of high-energy physics? What if the considerations that influence decisions about future investments are importantly directed by political or economic interests (say, the economic impact of future expansion of the Fermilab on the Chicago region)? What if companies that provide the technologies underlying super-conductor electromagnets needed for one strategy but not another are able to influence the decision in their favor? What are the implications for the future development of physics and other areas of science of these forms of non-scientific influence? (The decades-long case of the development of the V-22 Osprey aircraft is a case in point, where pressures on members of Congress from corporations in their districts led to the continuation of the costly project long after the service branches concluded it no longer served the needs of the services; link.)

Research within the STS field often addresses these kinds of issues. But so do researchers in organizational studies who would perhaps not identify themselves as part of the STS field. There is a robust tradition within sociology itself on the sociology of science. Robert Merton was a primary contributor with his book The Sociology of Science: Theoretical and Empirical Investigations (link). In organizational sociology Jason Owen-Smith's recent book Research Universities and the Public Good: Discovery for an Uncertain Future provides an insightful analysis of how research universities function as environments for scientific and technological research (link). And many other areas of research within contemporary organizational studies are relevant as well to the study of science as a socially constituted process. A good example of recent approaches in this field is Richard Scott and Gerald Davis, Organizations and Organizing: Rational, Natural and Open Systems Perspectives.

The big news for big science this week is the decision by CERN's governing body to take the first steps towards establishment of the successor to the Large Hadron Collider, at an anticipated cost of 21 billion euros (link). The new device would be an electron-positron collider, with a plan to replace it later in the century with a proton-proton collider. Perhaps naively, I am predisposed to think that CERN's decision-making and priority-setting processes are more fully guided by scientific consensus than is the Department of Energy's decision-making process. However, it would be very helpful to have in-depth analysis of the workings of CERN, given the key role that it plays in the development of high-energy physics today. Here is an article in Nature reporting efforts by social-science observers like Arpita Roy, Knorr Cetina, and John Krige to arrive at a more nuanced understanding of the decision-making processes at work within CERN (link).

Wednesday, June 24, 2020

Big physics and small physics




When Niels Bohr traveled to Britain in 1911 to study at the Cavendish Laboratory at Cambridge, the director was J. J. Thomson and the annual budget was minimal. In 1892 the entire budget for supplies, equipment, and laboratory assistants was a little over £1400 (Dong-Won Kim, Leadership and Creativity: A History of the Cavendish Laboratory, 1871-1919 (Archimedes), p. 81). Funding derived almost entirely from a small allocation from the University (about £250) and student fees deriving from lectures and laboratory use at the Cavendish (about £1179). Kim describes the finances of the laboratory in these terms:
Lack of funds had been a chronic problem of the Cavendish Laboratory ever since its foundation. Although Rayleigh had established a fund for the purchase of necessary apparatus, the Cavendish desperately lacked resources. In the first years of J.J.’s directorship, the University’s annual grant to the laboratory of about £250 did not increase, and it was used mainly to pay the wages of the Laboratory assistants (£214 of this amount, for example, went to salaries in 1892). To pay for the apparatus needed for demonstration classes and research, J.J. relied on student fees. 
Students ordinarily paid a fee of £1.1 to attend a lecture course and a fee of £3.3 to attend a demonstration course or to use space in the Laboratory. As the number of students taking Cavendish courses increased, so did the collected fees. In 1892, these fees totaled £1179; in 1893 the total rose a bit to £1240; and in 1894 rose again to £1409. Table 3.5 indicates that the Cavendish’s expenditures for “Apparatus, Stores, Printing, &c.” (£230 3s 6d in 1892) nearly equaled the University’s entire grant to the Cavendish (£254 7s 6d in 1892). (80)
The Cavendish Laboratory exerted great influence on the progress of physics in the early twentieth century; but it was distinctly organized around a "small science" model of research. (Here is an internal history of the Cavendish Lab; link.) The primary funding for research at the Cavendish came from the university itself, student fees, and occasional private gifts to support expansion of laboratory space, and these funds were very limited. And yet during those decades, there were plenty of brilliant physicists at work at the Cavendish Lab. Much of the future of twentieth century physics was still to be written, and Bohr and many other young physicists who made the same journey completely transformed the face of physics. And they did so in the context of "small science".

Abraham Pais's intellectual and scientific biography of Bohr, Niels Bohr's Times: In Physics, Philosophy, and Polity, provides a detailed account of Bohr's intellectual and personal development. Here is Pais's description of Bohr's arrival at the Cavendish Lab:
At the time of Bohr's arrival at the Cavendish, it was, along with the Physico-Technical Institute in Berlin, one of the world's two leading centers in experimental physics research. Thomson, its third illustrious director, successor to Maxwell and Rayleigh, had added to its distinction by his discovery of the electron, work for which he had received the Nobel Prize in 1906. (To date the Cavendish has produced 22 Nobel laureates.) In those days, 'students from all over the world looked to work with him... Though the master's suggestions were, of course, most anxiously sought and respected, it is no exaggeration to add that we were all rather afraid he might touch some of our apparatus.' Thomson himself was well aware that his interaction with experimental equipment was not always felicitous: 'I believe all the glass in the place is bewitched.' ... Bohr knew of Thomson's ideas on atomic structure, since these are mentioned in one of the latter's books which Bohr had quoted several times in his thesis. This problem was not yet uppermost in his mind, however, when he arrived in Cambridge. When asked later why he had gone there for postdoctoral research he replied: 'First of all I had made this great study of the electron theory. I considered... Cambridge as the center of physics and Thomson as a most wonderful man.' (117, 119)
On the origins of his theory of the atom:
Bohr's 1913 paper on α-particles, which he had begun in Manchester, and which had led him to the question of atomic structure, marks the transition to his great work, also of 1913, on that same problem. While still in Manchester, he had already begun an early sketch of these entirely new ideas. The first intimation of this comes from a letter, from Manchester, to Harald: 'Perhaps I have found out a little about the structure of atoms. Don't talk about it to anybody... It has grown out of a little information I got from the absorption of α-rays.' (128)
And his key theoretical innovation:
Bohr knew very well that his two quoted examples had called for the introduction of a new and as yet mysterious kind of physics, quantum physics. (It would become clear later that some oddities found in magnetic phenomena are also due to quantum effects.) Not for nothing had he written in the Rutherford memorandum that his new hypothesis 'is chosen as the only one which seems to offer a possibility of an explanation of the whole group of experimental results, which gather about and seems to confirm conceptions of the mechanismus [sic] of the radiation as the ones proposed by Planck and Einstein'. His reference in his thesis to the radiation law concerns of course Planck's law (5d). I have not yet mentioned the 'calculations of heat capacity' made by Einstein in 1906, the first occasion on which the quantum was brought to bear on matter rather than radiation. (138)
But here is the critical point: Bohr's pivotal contributions to physics derived from exposure to the literature in theoretical physics at the time, his own mathematical analysis of theoretical assumptions about the constituents of matter, and exposure to laboratories whose investment involved only a few thousand pounds.

Now move forward a few decades to 1929 when Ernest Lawrence conceived of the idea of the cyclical particle accelerator, the cyclotron, and soon after founded the Radiation Lab at Berkeley. Michael Hiltzik tells this story in Big Science: Ernest Lawrence and the Invention that Launched the Military-Industrial Complex, and it is a very good case study documenting the transition from small science to big science in the United States. The story demonstrates the vertiginous rise of large equipment, large labs, large funding, and big science. And it demonstrates the deeply interwoven careers of fundamental physics and military and security priorities. Here is a short description of Ernest Lawrence:
Ernest Lawrence’s character was a perfect match for the new era he brought into being. He was a scientific impresario of a type that had seldom been seen in the staid world of academic research, a man adept at prying patronage from millionaires, philanthropic foundations, and government agencies. His amiable Midwestern personality was as much a key to his success as his scientific genius, which married an intuitive talent for engineering to an instinctive grasp of physics. He was exceptionally good-natured, rarely given to outbursts of temper and never to expressions of profanity. (“ Oh, sugar!” was his harshest expletive.) Raising large sums of money often depended on positive publicity, which journalists were always happy to deliver, provided that their stories could feature fascinating personalities and intriguing scientific quests. Ernest fulfilled both requirements. By his mid-thirties, he reigned as America’s most famous native-born scientist, his celebrity validated in November 1937 by his appearance on the cover of Time over the cover line, “He creates and destroys.” Not long after that, in 1939, would come the supreme encomium for a living scientist: the Nobel Prize. (kl 118)
And here is Hiltzik's summary of the essential role that money played in the evolution of physics research in this period:
Money was abundant, but it came with strings. As the size of the grants grew, the strings tautened. During the war, the patronage of the US government naturally had been aimed toward military research and development. But even after the surrenders of Germany and Japan in 1945, the government maintained its rank as the largest single donor to American scientific institutions, and its military goals continued to dictate the efforts of academic scientists, especially in physics. World War II was followed by the Korean War, and then by the endless period of existential tension known as the Cold War. The armed services, moreover, had now become yoked to a powerful partner: industry. In the postwar period, Big Science and the “military-industrial complex” that would so unnerve President Dwight Eisenhower grew up together. The deepening incursion of industry into the academic laboratory brought pressure on scientists to be mindful of the commercial possibilities of their work. Instead of performing basic research, physicists began “spending their time searching for ways to pursue patentable ideas for economic rather than scientific reasons,” observed the historian of science Peter Galison. As a pioneer of Big Science, Ernest Lawrence would confront these pressures sooner than most of his peers, but battles over patents—not merely what was patentable but who on a Big Science team should share in the spoils—would soon become common in academia. So too would those passions that government and industry shared: for secrecy, for regimentation, for big investments to yield even bigger returns. (kl 185)

Particle accelerators became the critical tool in experimental physics. A succession of ever-more-powerful accelerators became the laboratory apparatus through which questions and theories being developed in theoretical physics could be pursued by bombarding targets with ever-higher energy particles (protons, electrons, neutrons). Instead of looking for chance encounters with high-energy cosmic rays, it was possible to use controlled processes within particle accelerators to send ever-higher energy particles into collisions with a variety of elements.
What is intriguing about Hiltzik's story is the fascinating interplay of separate factors the narrative invokes: major developments in theoretical physics (primarily in Europe), Lawrence's accidental exposure to a relevant research article, the personal qualities and ambition of Lawrence himself, the imperatives and opportunities for big physics created by atomic bomb research in the 1940s, and the institutional constraints and interests of the University of California. This is a story of the advancement of physics that illustrates a huge amount of contingency and path dependency during the 1930s through 1950s. The engineering challenges of building and maintaining a particle accelerator were substantial as well, and had those challenges not been surmounted, the instrument would have been impossible. (Maintaining a vacuum in a super-large canister itself proved to be a huge technical challenge.)

Physics changed dramatically between 1905 and 1945, and the balance between theoretical physics and experimental physics was one important indicator of this change. And the requirements of experimental physics went from the lab bench to the cyclotron -- from a few hundred dollars (pounds, marks, krone, euros) of investment to hundreds of millions of dollars (and now billions) in investment. This implied, fundamentally, that scientific research evolved from an individual activity taking place in university settings to an activity involving the interests of the state, big business, and the military -- in addition to the scientific expertise and imagination of the physicists.

Monday, April 20, 2020

The Malthusian problem for scientific research


It seems that there is a kind of inverse Malthusian structure to scientific research and knowledge. Topics for research and investigation multiply geometrically, while actual research and the creation of knowledge can only proceed in a selective and linear way. This is true in every field -- natural science, biology, social science, poetry. Take Darwin. He specialized in finches for a good while. But he could easily have taken up worms, beetles, or lizards, or he could have turned to conifers, oak trees, or cactuses. The evidence of speciation lies everywhere in the living world, and it is literally impossible for a generation of scientists of natural history to study them all.
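To see how quickly the gap opens up, here is a purely illustrative toy model (schematic numbers of my own, not an empirical claim): the pool of open topics doubles each generation, while the research community can complete only a fixed number of studies per generation. The share of topics ever studied shrinks toward zero.

```python
# Toy model of the "inverse Malthusian" gap: topics multiply geometrically,
# research capacity is linear. All numbers are schematic.
open_questions = 1000.0   # unexamined topics at the start
answered = 0.0
growth_rate = 2.0         # the pool of open topics doubles each generation
capacity = 50.0           # studies that can be completed per generation

for generation in range(1, 11):
    open_questions *= growth_rate              # topics multiply geometrically
    studied = min(open_questions, capacity)    # research proceeds linearly
    answered += studied
    open_questions -= studied
    coverage = answered / (answered + open_questions)
    print(f"generation {generation:2d}: "
          f"unexamined topics ~ {open_questions:10.0f}, "
          f"share ever studied ~ {coverage:.2%}")
```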

Or consider a topic of current interest to me, the features that lead to dysfunctional performance in organizations large and small. Once we notice that the specific workings of an organization lead to harmful patterns that we care about a great deal, it makes sense to consider case studies of an unbounded number of organizations in every sector. How did the UAW work such that rampant corruption emerged? What features of the Chinese Communist Party led it to the profound secrecy tactics routinely practiced by its officials? What features of the Xerox Corporation made it unable to turn the mouse-based computer interface system into a commercial blockbuster? Each of these questions suggests the value of an organized case study, and surely we would learn a lot from each study. But each such study takes a person-year to complete, and a given scholar is unlikely to want to spend the rest of her career doing case studies like these. So the vast majority of such studies will never be undertaken. 

This observation has very intriguing implications for the nature of our knowledge about the world -- natural, biological, and social. It seems to imply that our knowledge of the world will always be radically incomplete, with vast volumes of research questions unaddressed and sources of empirical phenomena unexamined. We might take it as a premise that there is nothing in the world that cannot be understood if investigated scientifically; but these reflections suggest that we are still forced to conclude that there is a limitless range of phenomena that have not been investigated, and will never be.

It is possible that philosophers of physics would argue that this "incompleteness" result does not apply to the realm of physical phenomena, because physics is concerned to discover a small number of fundamental principles and laws about how the micro- and macro-worlds of physical phenomena work. The diversity of the physical world is then untroubling, because every domain of physics can be subsumed under these basic principles and theories. Theories of gravitation, subatomic particles and forces, space-time relativity, and the quantum nature of the world are obscure but general and simple, and there is at least the hope that we might arrive at a comprehensive physics with the resources needed to explain all physical phenomena, from black-hole pairs to the nature of dark matter.

Whatever the case with physics, the phenomena of the social world are plainly not regulated by a simple set of fundamental principles and laws. Rather, heterogeneity, exception, diversity, and human creativity are fundamental characteristics of the social world. And this implies the inherent incompleteness of social knowledge. Variation and heterogeneity are the rule; so novel cases are always available, and studying them always leads to new insights and knowledge. Therefore there are always domains of phenomena that have not yet been examined, understood, or explained. This conclusion is a bit like Cantor's diagonal proof that the real numbers cannot all be enumerated in a list: every number can be represented as an infinite decimal, and yet for every list of infinite decimals it is simple to generate another infinite decimal that is not on the list.
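The analogy can be made concrete with a small sketch (illustrative code of my own, using the standard diagonal construction): given any list of decimal expansions, build a new expansion that differs from the n-th entry in its n-th digit.

```python
def diagonal_not_on_list(listed_digits):
    """Given the first n digits of each of n listed decimals in [0, 1),
    return the digits of a decimal that differs from the n-th listed
    decimal in its n-th digit, so it cannot coincide with any entry.
    Using only the digits 4 and 5 sidesteps the 0.4999... = 0.5000... issue."""
    return [4 if digits[n] == 5 else 5 for n, digits in enumerate(listed_digits)]

listed = [
    [3, 1, 4, 1, 5],   # 0.31415...
    [5, 0, 0, 0, 0],   # 0.50000...
    [1, 4, 1, 4, 2],   # 0.14142...
    [9, 9, 9, 9, 9],   # 0.99999...
    [2, 7, 1, 8, 2],   # 0.27182...
]
print(diagonal_not_on_list(listed))   # [5, 5, 5, 5, 5]: differs from entry n in digit n
```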

Further, the biological realm may seem to resemble the social realm in these respects, so that biological science is inherently incomplete as well. Even granting that the theories of evolution and natural selection are fundamental and universal in biological systems, the principles specified in these theories guarantee diversification and variation in biological outcomes. As a result we might argue that the science of living systems too is inherently incomplete, with new areas of inquiry outstripping the ability of the scientific enterprise to investigate them. In a surprising way the uncertainties we confront in the Covid-19 crisis seem to illustrate this situation. We don't know whether this particular virus will stimulate an enduring immunity in individuals who have experienced the infection, and "first principles" in virology do not seem to afford a determinate answer to the question.

Consider these two patterns. The first is woven linen; the second is the pattern of habitat for invasive species across the United States. The weave of the linen is mechanical and regular; it covers all parts of the space with a grid of fiber. The second is the path-dependent result of invasion of habitat by multiple invasive species. Certain areas are intensively inhabited, while other areas are essentially free of invasive species. The regularity of the first image is a design feature of the process that created the fabric; the irregularity and variation of the second image is the consequence of multiple independent and somewhat stochastic yet opportunistic exploratory movements of the various species. Is scientific research more similar to the first pattern or the second?




I would suggest that scientific research more resembles the second process than the first. Researchers are guided by their scientific curiosity, the availability of research funding, and the assumptions about the importance of various topics embodied in their professions; and the result is a set of investigations and findings that are very intensive in some areas, while completely absent in other areas of the potential "knowledge space".
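A minimal simulation can illustrate the contrast (a toy model of my own devising, with arbitrary parameters): a systematic "weave" covers the whole grid of possible topics, while the same number of studies allocated through path-dependent exploration leaves much of the space untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
size, budget = 50, 2500        # a 50 x 50 grid of possible topics; 2,500 studies

# Pattern 1: the "woven linen" process -- studies laid down cell by cell.
woven = np.zeros((size, size), dtype=bool)
woven.flat[:budget] = True

# Pattern 2: path-dependent exploration -- a few research programs, each of
# which takes up a topic adjacent to the one it studied last.
explored = np.zeros((size, size), dtype=bool)
programs = [rng.integers(0, size, 2) for _ in range(5)]
for _ in range(budget // len(programs)):
    for pos in programs:
        explored[pos[0], pos[1]] = True
        pos += rng.integers(-1, 2, 2)          # drift to a neighboring topic
        np.clip(pos, 0, size - 1, out=pos)

print("systematic coverage:     ", woven.mean())      # 1.0 by construction
print("path-dependent coverage: ", explored.mean())   # typically far below 1.0
```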

Is this a troubling finding? Only if one thought that the goal of science is to eventually provide an answer to every possible empirical question, and to provide a general basis for explaining everything. If, on the other hand, we believe that science is an open-ended process, and that the selection of research topics is subject to a great deal of social and personal contingency, then the incompleteness of science comes as no surprise. Science is always exploratory, and there is much to explore in human experience.

(Several earlier posts have addressed the question of defining the scope of the social sciences; link, link, link, link, link.)

Saturday, March 4, 2017

The atomic bomb


Richard Rhodes' history of the development of the atomic bomb, The Making of the Atomic Bomb, is now thirty years old. The book is crucial reading for anyone who has the slightest anxiety about the tightly linked, high-stakes world we live in in the twenty-first century. The narrative Rhodes provides of the scientific and technical history of the era is outstanding. But there are other elements of the story that deserve close thought and reflection as well.

One is the question of the role of scientists in policy and strategy decision making before and during World War II. Physicists like Bohr, Szilard, Teller, and Oppenheimer played crucial roles in the science, but they also played important roles in the formulation of wartime policy and strategy as well. Were they qualified for these roles? Does being a brilliant scientist carry over to being an astute and wise advisor when it comes to the large policy issues of the war and international policies to follow? And if not the scientists, then who? At least a certain number of senior policy advisors to the Roosevelt administration, international politics experts all, seem to have badly dropped the ball during the war -- in ignoring the genocidal attacks on Europe's Jewish population, for example. Can we expect wisdom and foresight from scientists when it comes to politics, or are they as blinkered as the rest of us on average?

A second and related issue is the moral question: do scientists have any moral responsibilities when it comes to the use, intended or otherwise, of the technologies they spawn? A particularly eye-opening part of the story Rhodes tells is the research undertaken within the Manhattan Project about the possible use of radioactive material as a poisonous weapon of war against civilians on a large scale. The topic seems to have arisen as a result of speculation about how the Germans might use radioactive materials against civilians in Great Britain and the United States. Samuel Goudsmit, scientific director of the US military team responsible for investigating German progress towards an atomic bomb following the Normandy invasion, refers to this concern in his account of the mission in Alsos (7). According to Rhodes, the idea was first raised within the Manhattan Project by Fermi in 1943, and was realistically considered by Groves and Oppenheimer. This seems like a clear case: no scientist should engage in research like this, research aimed at discovering the means of the mass poisoning of half a million civilians.

Leo Szilard played an exceptional role in the history of the quest for developing atomic weapons (link). He more than other physicists foresaw the implications of the possibility of nuclear fission as a foundation for a radically new kind of weapon, and his fear of German mastery of this technology made him a persistent and ultimately successful advocate for a major research and industrial effort towards creating the bomb. His recruitment of Albert Einstein as the author of a letter to President Roosevelt underlining the seriousness of the threat and the importance of establishing a full scale effort made a substantial difference in the outcome. Szilard was entirely engaged in efforts to influence policy, based on his understanding of the physics of nuclear fission; he was convinced very early that a fission bomb was possible, and he was deeply concerned that German physicists would succeed in time to permit the Nazis to use such a weapon against Great Britain and the United States. Szilard was a physicist who also offered advice and influence on the statesmen who conducted war policy in Great Britain and the United States.

Niels Bohr is an excellent example to consider with respect to both large questions (link). He was, of course, one of the most brilliant and innovative physicists of his generation, recognized with the Nobel Prize in 1922. He was also a man of remarkable moral courage, remaining in Copenhagen long after prudence would have dictated emigration to Britain or the United States. He was more articulate and outspoken than most scientists of the time about the moral responsibilities the physicists undertook through their research on atomic energy and the bomb. He was farsighted about the implications for the future of warfare created by a successful implementation of an atomic or thermonuclear bomb. Finally, he is exceptional, on a par with Einstein, in his advocacy of a specific approach to international relations in the atomic age, and was able to meet with both Roosevelt and Churchill to make his case. His basic view was that the knowledge of fission could not be suppressed, and that the Allies would be best served in the long run by sharing their atomic knowledge with the USSR and working towards an enforceable non-proliferation agreement. The meeting with Churchill went particularly badly, with Churchill eventually maintaining that Bohr should be detained as a security risk.

Here is the memorandum that Bohr wrote to President Roosevelt in 1944 (link). Bohr makes the case for public sharing of the scientific and technical knowledge each nation has gained about nuclear weapons, and the establishment of a regime among nations that precludes the development and proliferation of nuclear weapons. Here are a few key paragraphs from his memorandum to Roosevelt:
Indeed, it would appear that only when the question is raised among the united nations as to what concessions the various powers are prepared to make as their contribution to an adequate control arrangement, will it be possible for any one of the partners to assure himself of the sincerity of the intentions of the others.

Of course, the responsible statesmen alone can have insight as to the actual political possibilities. It would, however, seem most fortunate that the expectations for a future harmonious international co-operation, which have found unanimous expressions from all sides within the united nations, so remarkably correspond to the unique opportunities which, unknown to the public, have been created by the advancement of science.
These thoughts are not put forward in the spirit of high-minded idealism; they are intended to serve as sober, fact-based guides to a more secure future. So it is worth considering: do the facts about international behavior justify the recommendations?

In fact the world has settled on a hybrid set of approaches: the doctrine of deterrence based on mutual assured destruction, and a set of international institutions to which nations are signatories, intended to prevent or slow the proliferation of nuclear weapons. Another brilliant thinker and 2005 Nobel Prize winner, Thomas Schelling, provided the analysis that expresses the current theory of deterrence in his 1966 book Arms and Influence (link).

So who is closer to the truth when it comes to projecting the behavior of partially rational states and their governing apparatuses? My view is that the author of Micromotives and Macrobehavior has the more astute understanding of the logic of disaggregated collective action and the ways that a set of independent strategies aggregate to the level of organizational or state-level behavior. Schelling's analysis of the logic of deterrence and the quasi-stability that it creates is compelling -- perhaps more so than Bohr's vision, which depends at critical points on voluntary compliance.


This judgment receives support from international relations scholars of the following generation as well. For example, in an extensive article published in 1981 (link) Kenneth Waltz argues that nuclear weapons have helped to make international peace more stable, and his argument turns entirely on the rational-choice basis of the theory of deterrence:
What will a world populated by a larger number of nuclear states look like? I have drawn a picture of such a world that accords with experience throughout the nuclear age. Those who dread a world with more nuclear states do little more than assert that more is worse and claim without substantiation that new nuclear states will be less responsible and less capable of self-control than the old ones have been. They express fears that many felt when they imagined how a nuclear China would behave. Such fears have proved unfounded as nuclear weapons have slowly spread. I have found many reasons for believing that with more nuclear states the world will have a promising future. I have reached this unusual conclusion for six main reasons.

First, international politics is a self-help system, and in such systems the principal parties do most to determine their own fate, the fate of other parties, and the fate of the system. This will continue to be so, with the United States and the Soviet Union filling their customary roles. For the United States and the Soviet Union to achieve nuclear maturity and to show this by behaving sensibly is more important than preventing the spread of nuclear weapons.

Second, given the massive numbers of American and Russian warheads, and given the impossibility of one side destroying enough of the other side’s missiles to make a retaliatory strike bearable, the balance of terror is indestructible. What can lesser states do to disrupt the nuclear equilibrium if even the mighty efforts of the United States and the Soviet Union cannot shake it? The international equilibrium will endure. (concluding section)
The logic of rational cooperation, and the constant possibility of defection, seem to undermine the possibility of the kind of quasi-voluntary nuclear regime that Bohr hoped for -- one based on unenforceable agreements about the development and use of nuclear weapons. The incentives in favor of defection are too great.

So this seems to be a case where a great physicist has a less than compelling theory of how an international system of nations might work. And if the theory is unreliable, then so are the policy recommendations that follow from it.
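As a footnote to the point about defection, here is a minimal sketch (toy payoff numbers of my own, not Schelling's or Waltz's) of why an unenforceable disarmament agreement is unstable: whatever the other side does, arming is the better reply, so the cooperative outcome unravels without enforcement.

```python
# One-shot "disarm or arm" game with schematic payoffs.
payoffs = {
    # (my move, their move): my payoff
    ("disarm", "disarm"): 3,   # mutual restraint
    ("disarm", "arm"):    0,   # exposed to coercion
    ("arm",    "disarm"): 4,   # unilateral leverage
    ("arm",    "arm"):    1,   # costly standoff, yet individually "rational"
}

for their_move in ("disarm", "arm"):
    best = max(("disarm", "arm"), key=lambda my_move: payoffs[(my_move, their_move)])
    print(f"if the other side chooses {their_move!r}, the best reply is {best!r}")
# Both lines print 'arm': defection is the dominant strategy.
```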

Tuesday, February 28, 2017

Discovering the nucleus




In the past year or so I've been reading a handful of fascinating biographies and histories involving the evolution of early twentieth-century physics, paying attention to the individuals, the institutions, and the ideas that contributed to the making of post-classical physics. The primary focus is on the theory of the atom and the nucleus, and the emergence of the theory of quantum mechanics. The major figures who have come into this complex narrative include Dirac, Bohr, Heisenberg, von Neumann, Fermi, Rutherford, Blackett, Bethe, and Feynman, along with dozens of other mathematicians and physicists. Institutions and cities played a key role in this story -- Manchester, Copenhagen, Cambridge, Göttingen, Budapest, Princeton, Berkeley, Ithaca, Chicago. And of course written throughout this story is the rise of Nazism, World War II, and the race for the atomic bomb. This is a crucially important period in the history of science, and the physics that was created between 1900 and 1960 has fundamentally changed our view of the natural world.



One level of interest for me in doing this reading is the math and physics themselves. As a high school student I was fascinated with physics. I learned some of the basics of the story of modern physics before I went to college -- the ideas of special relativity theory, the hydrogen spectrum lines, the twin-slit experiments, the puzzles of radiation and the atom leading to the formulation of the quantum theory of electromagnetic radiation, the discoveries of superconductivity and lasers. In college I became a physics and mathematics major at the University of Illinois, though I stayed with physics only through the end of the first two years of course work (electricity and magnetism, theoretical and applied mechanics, several chemistry courses, real analysis, advanced differential equations). (Significantly for the recent reading I've been doing, I switched from physics to philosophy while I was taking the junior level quantum mechanics course.) I completed a mathematics major, along with a philosophy degree, and did a PhD in philosophy because I felt philosophy offered a broader intellectual platform on questions that mattered.

So I've always felt I had a decent layman's understanding of the questions and issues driving modern physics. One interesting result of reading all this historical material about the period of 1910-1935, however, is that I've realized what large holes there are in my mental map of the topics, both in the physics and the math. And it is genuinely interesting to realize that there are deeply fascinating questions in this terrain which I haven't really got an inkling about. It is energizing to know that it is entirely possible to open up new areas of knowledge and inquiry for oneself. 

Of enduring interest in this story is the impression that emerges of amazingly rapid progress in physics in these few decades, with major discoveries and new mathematical methods emerging in weeks and months rather than decades and centuries. The intellectual pace in places like Copenhagen, Princeton, and Göttingen was staggering, and scientists like Bohr, von Neumann, and Heisenberg genuinely astonish the reader with the fertility of their scientific abilities. Moreover, the theories and mathematical formulations that emerged had remarkably precise and unexpected predictive consequences. Physical theory and experimentation reached a fantastic degree of synergy.

The institutions of research that developed through this period are fascinating as well. The Cavendish lab at Cambridge, the Institute for Advanced Studies at Princeton, the Niels Bohr Institute in Copenhagen, the math and physics centers at Göttingen, and the many conferences and journals of the period facilitated rapid progress of atomic and nuclear physics. The USSR doesn't come into the story as fully as one would like, and it is intriguing to speculate about the degree to which Stalinist dogmatism interfered with the development of Soviet physics. 

I also find fascinating in retrospect the relations that seem to exist between physics and the philosophy of science in the twentieth century. In philosophy we tend to think that the discipline of the philosophy of science in its twentieth-century development was too dependent on physics. That is probably true. But it seems that the physics in question was more often classical physics and thermodynamics, not modern mathematical physics. Carnap, for example, gives no serious attention to developments in the theory of quantum mechanics in his lectures, Philosophical Foundations of Physics. The philosophy of the Vienna Circle could have reflected relativity theory and quantum mechanics, but it didn't to any significant degree. Instead, the achievements of nineteenth-century physics seem to have dominated the thinking of Carnap, Schlick, and Popper. Logical positivism doesn't seem to be much influenced by modern physics, including relativity theory, quantum theory, and mathematical physics.  Post-positivist philosophers Kuhn, Hanson, and Feyerabend refer to some of the discoveries of twentieth-century physics, but their works don't add up to a new foundation for the philosophy of science. Since the 1960s there has been a robust field of philosophy of physics, and the focus of this field has been on quantum mechanics; but the field has had only limited impact on the philosophy of science more broadly. (Here is a guide to the philosophy of physics provided to philosophy graduate students at Princeton; link.)

On the other hand, quantum mechanics itself seems to have been excessively influenced by a hyper version of positivism and verificationism. Heisenberg in particular seems to have favored a purely instrumentalist and verificationist interpretation of quantum mechanics -- the idea that the mathematics of quantum mechanics serves solely to summarize the results of experiment and observation, not to allow for true statements about unobservables. It is anti-realist and verificationist.

I suppose that there are two rather different ways of reading the history of twentieth-century physics. One is that quantum mechanics and relativity theory demonstrate that the physical world is incomprehensibly different from our ordinary Euclidean and Kantian ideas about ordinary-sized objects -- with the implication that we can't really understand the most fundamental level of the physical world. Ordinary experience and relativistic quantum-mechanical reality are just fundamentally incommensurable. But the other way of reading this history of physics is to marvel at the amount of new insight and clarity that physics has brought to our understanding of the subatomic world, in spite of the puzzles and anomalies that seem to remain. Mathematical physical theory made possible observation, measurement, and technological use of the microstructure of the world in ways that the ancients could not have imagined. I am inclined towards the latter view.

It is also sobering for a philosopher of social science to realize that there is nothing comparable to this history in the history of the social sciences. There is no comparable period where fundamental and enduring new insights into the underlying nature of the social world became possible to a degree comparable to this development of our understanding of the physical world. In my view as a philosopher of social science, that is perfectly understandable; the social world is not like the physical world. Social knowledge depends on fairly humdrum discoveries about actors, motives, and constraints. But the comparison ought to make us humble even as we explore new theoretical ideas in sociology and political science.

If I were asked to recommend only one out of all these books for a first read, it would be David Cassidy's Heisenberg volume, Beyond Uncertainty. Cassidy makes sense of the physics in a serious but not fully technical way, and he raises important questions about Heisenberg the man, including his role in the German search for the atomic bomb. Also valuable is Richard Rhodes' book, The Making of the Atomic Bomb: 25th Anniversary Edition.


Monday, December 19, 2016

Menon and Callender on the physics of phase transitions


In an earlier post I considered the topic of phase transitions as a possible source of emergent phenomena (link). I argued there that phase transitions are indeed interesting, but don't raise a serious problem of strong emergence. Tarun Menon considers this issue in substantial detail in the chapter he co-authored with Craig Callender in The Oxford Handbook of Philosophy of Physics, "Turn and face the strange ... ch-ch-changes: Philosophical questions raised by phase transitions" (link). Menon and Callender provide a very careful and logical account of three ways of approaching the physics of phase transitions and three versions of emergence (conceptual, explanatory, ontological). The piece is technical but very interesting, with a somewhat deflating conclusion (if you are a fan of emergence):
We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. 
Menon and Callender review three approaches to the phenomenon of phase transitions offered by physics: classical thermodynamics, statistical mechanics, and renormalization group theory. Thermodynamics describes the behavior of materials (gases, liquids, and solids) at the macro level, while statistical mechanics and renormalization group theory are theories of the micro-states of materials, intended to allow derivation of the macro behavior of materials from statistical properties of those micro-states. They describe this relationship in these terms:
Statistical mechanics is the theory that applies probability theory to the microscopic degrees of freedom of a system in order to explain its macroscopic behavior. The tools of statistical mechanics have been extremely successful in explaining a number of thermodynamic phenomena, but it turned out to be particularly difficult to apply the theory to the study of phase transitions. (193)
Here is the mathematical definition of phase transition that they provide:
Mathematically, phase transitions are represented by nonanalyticities or singularities in a thermodynamic potential. A singularity is a point at which the potential is not infinitely differentiable, so at a phase transition some derivative of the thermodynamic potential changes discontinuously. (191)
And they offer this definition:

(Def 1) An equilibrium phase transition is a nonanalyticity in the free energy. (194)

Here is their description of how the renormalization group theory works:
To explain the method, we return to our stalwart Ising model. Suppose we coarse-grain a 2-D Ising model by replacing 3 × 3 blocks of spins with a single spin pointing in the same direction as the majority in the original block. This gives us a new Ising system with a longer distance between lattice sites, and possibly a different coupling strength. You could look at this coarse-graining procedure as a transformation in the Hamiltonian describing the system. Since the Hamiltonian is characterized by the coupling strength, we can also describe the coarse-graining as a transformation in the coupling parameter. Let K be the coupling strength of the original system and R be the relevant transformation. The new coupling strength is K′ = RK. This coarse-graining procedure could be iterated, producing a sequence of coupling parameters, each related to the previous one by the transformation R. The transformation defines a flow on parameter space. (195)
Renormalization group theory, then, is essentially the mathematical basis of coarse-graining analysis (link).
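To make the block-spin idea concrete, here is a minimal sketch (my own illustration, not drawn from Menon and Callender) of the majority-rule coarse-graining step applied to a two-dimensional array of ±1 spins. It performs only the spin-blocking part of the procedure; a full renormalization-group step would also recompute the effective coupling K′ at each stage, which this sketch omits.

```python
# A minimal sketch of the majority-rule block-spin step described above,
# applied to a 2-D array of +/-1 spins. The names are illustrative only.
import numpy as np

def block_spin(spins, b=3):
    """Replace each b x b block of spins with the sign of its majority."""
    L = spins.shape[0] - (spins.shape[0] % b)   # trim so the lattice divides evenly
    trimmed = spins[:L, :L]
    blocks = trimmed.reshape(L // b, b, L // b, b)
    block_sums = blocks.sum(axis=(1, 3))        # net magnetization of each b x b block
    return np.where(block_sums >= 0, 1, -1)     # majority spin (ties broken toward +1)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(81, 81))      # a random 81 x 81 configuration of spins
coarse = block_spin(spins)                      # 27 x 27 lattice of block spins
coarser = block_spin(coarse)                    # iterating the map: 9 x 9 lattice
print(spins.shape, coarse.shape, coarser.shape) # (81, 81) (27, 27) (9, 9)
```

Iterating the map in this way is what generates the "flow" that Menon and Callender describe, though tracking the flow properly requires following the coupling parameter as well as the spins.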

The key difficulty that has been used to ground arguments for the strong emergence of phase transitions is now apparent: there seems to be a mismatch between the resources of statistical mechanics and the findings of thermodynamics. In principle, physicists would like to hold that statistical mechanics provides the micro-level representation of the phenomena described by thermodynamics; or in other words, that thermodynamic facts can be derived from statistical mechanics. However, the definition of a phase transition above requires that the phenomena display "nonanalyticities" -- instantaneous and discontinuous changes of state. And it is easily demonstrated that the free energy of any finite system described by statistical mechanics is analytic: change may be abrupt, but it is not discontinuous, and the function is infinitely differentiable. So if phase transitions are points of nonanalyticity, and statistical mechanics (for finite systems) does not admit nonanalyticities, then it would appear that thermodynamics is not derivable from statistical mechanics. Similar reasoning applies to renormalization group theory.

This problem was resolved within statistical mechanics by idealizing the system as containing infinitely many bodies (or, alternatively, as an infinitely compressed volume of bodies) -- the thermodynamic limit; but neither of these assumptions of infinity is true of any real material.
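A toy calculation (my own illustration, not taken from Menon and Callender) shows how a nonanalyticity can appear only in the infinite-size limit: the function f_N(h) = -(1/N) ln(e^{Nh} + e^{-Nh}) is infinitely differentiable for every finite N, yet as N grows it converges to -|h|, whose first derivative jumps at h = 0. The free energy per particle behaves analogously: only in the infinite-particle limit does a genuine nonanalyticity in the sense of Def 1 appear.

```python
# Toy illustration: f_N(h) = -(1/N) * ln(exp(N*h) + exp(-N*h)) is smooth for
# every finite N but converges to -|h|, which has a kink at h = 0.
import numpy as np

def f(h, N):
    # np.logaddexp computes ln(exp(a) + exp(b)) without overflow
    return -np.logaddexp(N * h, -N * h) / N

def f_prime(h, N):
    return -np.tanh(N * h)      # the exact derivative of f(., N)

for N in (1, 10, 100, 10_000):
    # the slopes just left and right of h = 0 approach +1 and -1: a
    # discontinuous first derivative emerges only in the infinite-N limit
    print(N, round(f(0.0, N), 4), round(f_prime(-0.01, N), 4), round(f_prime(0.01, N), 4))
```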

So are phase transitions "emergent" phenomena in either a weak sense or a strong sense, relative to the micro-states of the material in question? The strongest sense of emergence is what Menon and Callender call ontological irreducibility.
Ontological irreducibility involves a very strong failure of reduction, and if any phenomenon deserves to be called emergent, it is one whose description is ontologically irreducible to any theory of its parts. Batterman argues that phase transitions are emergent in this sense (Batterman 2005). It is not just that we do not know of an adequate statistical mechanical account of them, we cannot construct such an account. Phase transitions, according to this view, are cases of genuine physical discontinuities. (215)
The possibility that phase transitions are ontologically emergent at the level of thermodynamics is raised by the mathematical characteristics of the equations that constitute the statistical-mechanical description of the micro-level -- the infinite differentiability of those equations for finite systems. But Menon and Callender give compelling reasons for thinking this argument is misleading. They hold that phase transitions constitute a conceptual novelty with respect to the resources of statistical mechanics -- phase transitions do not correspond to natural kinds at the level of the micro-constitution of the material. But they argue that this does not establish that the phenomena cannot be explained by or derived from a micro-level description. So phase transitions are not emergent according to the explanatory or ontological understandings of that idea.

The nub of the issue comes down to how we construe the idealization in statistical mechanics according to which a material consists of an infinite number of elements. This is plainly untrue of any real system (gas, liquid, or solid). The fact that real systems have boundaries implies that important thermodynamic properties are not strictly "extensive" with volume (extensivity being the requirement that twice the volume yields twice the entropy). But the finitude of a volume of material affects its behavior through what happens at the edges of the volume, and in many instances these boundary effects are small relative to the behavior of the whole, if the volume is large enough.
Does this fact imply that there is a great mystery about extensivity, that extensivity is truly emergent, that thermodynamics does not reduce to finite N statistical mechanics? We suggest that on any reasonably uncontentious way of defining these terms, the answer is no. We know exactly what is happening here. Just as the second law of thermodynamics is no longer strict when we go to the microlevel, neither is the concept of extensivity. (201-202)
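A back-of-the-envelope illustration (mine, not the authors') of why the failure of strict extensivity is undramatic: in an L x L x L cubic lattice, the fraction of sites lying on the boundary shrinks toward zero as L grows, so bulk behavior dominates for large samples even though no finite sample is strictly extensive.

```python
# Fraction of lattice sites on the surface of an L x L x L cube: it vanishes
# as L grows, which is why boundary corrections to extensive quantities are
# negligible for macroscopic samples.
for L in (10, 100, 1_000, 10_000):
    interior = (L - 2) ** 3
    surface_fraction = 1 - interior / L ** 3
    print(L, f"{surface_fraction:.6f}")
```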
There is an important idealization in the thermodynamic description as well -- the notion that certain kinds of change are instantaneous or discontinuous. This assumption, too, can be seen as an idealization of a physical system that is undergoing changes at different rates under different environmental conditions. What thermodynamics describes as an instantaneous change from liquid to gas may be better understood as a very rapid but continuous process of change at the molar level.

(The fact that some systems admit of coarse-grained descriptions has an interesting implication for this set of issues (link): while it is generally true that the micro-states of such a system entail its macro-states, the reverse is not true. We cannot infer from a given macro-state to the exact underlying micro-state; rather, many possible micro-states correspond to a given macro-state.)

The conclusion they reach is worth quoting:
Phase transitions are an important instance of putatively emergent behavior. Unlike many things claimed emergent by philosophers (e.g., tables and chairs), the alleged emergence of phase transitions stems from both philosophical and scientific arguments. Here we have focused on the case for emergence built from physics. We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. And if one goes past textbook statistical mechanics, then an argument can be made that they are not even conceptually novel. In the case of renormalization group theory, consideration of infinite systems and their singular behavior provides a central theoretical tool, but this is compatible with an explanatory reduction. Phase transitions may be “emergent” in some sense of this protean term, but not in a sense that is incompatible with the reductionist project broadly construed. (222)
Or in other words, Menon and Callender rebut one of the most technically compelling arguments for ontological emergence in physical systems. They show that the phenomena of phase transitions as described by classical thermodynamics are compatible with reduction to the dynamics of the individual elements at the micro-level, so phase transitions are not ontologically emergent.

Are these arguments relevant in any way to debates about emergence in social system dynamics? The direct relevance is limited, since these arguments depend entirely on the mathematical properties of the way in which the micro-level of physical systems is characterized (statistical mechanics). But the more general lesson does seem relevant: rather than simply postulating that certain social characteristics are ontologically emergent relative to the actors that make them up, we would be better advised to look for the local-level processes that bring about surprising transitions at critical points (for example, the shift of a flock of birds from random flight to a swarm in a few seconds).

Thursday, November 24, 2016

Coarse-graining of complex systems


The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about its micro-states? It turns out that the relationship between micro and macro in physical systems has some surprising aspects. The mathematical technique of "coarse-graining" represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local averages without reducing our ability to calculate the macro-level dynamics of the system.

A 2004 article by Israeli and Goldenfeld, "Coarse-graining of cellular automata, emergence, and the predictability of complex systems" (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let's see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:
We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.
This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome ("without accounting for small-scale details"). Another, related idea is that macro-level patterns are (sometimes) forced by the "rules that govern the large scale dynamics" -- rather than by the micro-level states.
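To make this concrete, here is a small worked example in the spirit of Israeli and Goldenfeld's construction (the specific rule pair and projection are my own verification for illustration, not necessarily one of their published examples): elementary rule 105, coarse-grained in two-cell blocks by taking the XOR of each pair, behaves exactly like elementary rule 150 on the coarse lattice.

```python
# Sketch of a coarse-graining check for elementary cellular automata: does
# projecting after two fine steps equal one coarse step after projecting?
import numpy as np

def ca_step(state, rule):
    """One synchronous update of an elementary CA with periodic boundaries."""
    table = np.array([(rule >> k) & 1 for k in range(8)])   # rule number as a lookup table
    l, c, r = np.roll(state, 1), state, np.roll(state, -1)
    return table[4 * l + 2 * c + r]                          # neighborhood pattern 0..7

def project_xor(state):
    """Coarse-grain by replacing each pair of adjacent cells with their XOR."""
    return state[0::2] ^ state[1::2]

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.integers(0, 2, size=64)                                  # random fine configuration
    fine_then_project = project_xor(ca_step(ca_step(x, 105), 105))   # two fine steps, then project
    coarse_step = ca_step(project_xor(x), 150)                       # project, then one coarse step
    assert np.array_equal(fine_then_project, coarse_step)
print("rule 105, blocked in pairs via XOR, emulates rule 150 at the coarse scale")
```

The point is that the coarse description evolves by its own simple rule, without any need to track the fine-grained details that were discarded by the projection.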

Coarse-graining methodology is a family of computational techniques that permit "averaging" of values (intensities) from the micro-level to a higher level of organization. These techniques have been applied primarily to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3x3 square of the grid and assigning it the up-down value corresponding to the majority of atoms in that square. Now the information about nine atoms has been reduced to a single piece of information for the 3x3 block. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might "coarse-grain" this information by replacing the household-level data with the majority affiliation of each 3x3 block of households. We might take another step of aggregation by considering 3x3 blocks of blocks, and representing each larger composite by the majority value of its component blocks.

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:
Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)
Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived ("extracted") from the micro level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the macro-states S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.
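A quick count (my own illustration) makes the many-one relationship vivid: under 3x3 majority coarse-graining of the kind described above, each coarse value is compatible with 256 of the 512 possible micro-configurations of the block.

```python
# Count how many micro-configurations of a 3x3 block of +/-1 spins map to each
# coarse value under the majority rule: a concrete many-one relationship.
from itertools import product

counts = {1: 0, -1: 0}
for block in product([-1, 1], repeat=9):        # all 2**9 = 512 micro-configurations
    majority = 1 if sum(block) > 0 else -1      # the block's coarse value (no ties with 9 spins)
    counts[majority] += 1

print(counts)                                   # {1: 256, -1: 256}
```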

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven't yet addressed here. This is the connection they draw between complex physical systems and cellular automata. A cellular automaton is a simulation in which simple rules govern the behavior of each cell within the grid. The Game of Life is an example of a cellular automaton (link); a minimal sketch of its update rule appears after the following quotation. Here is what they say about the connection between physical systems and their simulations as a system of algorithms:
The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)
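As promised above, here is a minimal sketch (mine) of one synchronous update of the Game of Life. Each cell's fate depends only on its eight neighbors, and yet, if Wolfram is right about computational irreducibility, the long-run behavior of such systems can in general be found only by stepping the whole grid forward, one update at a time.

```python
# One synchronous Game of Life update on a 0/1 grid with periodic boundaries.
import numpy as np

def life_step(grid):
    """Apply Conway's rules: birth on 3 neighbors, survival on 2 or 3."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    birth = (grid == 0) & (neighbors == 3)
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (birth | survive).astype(int)

state = np.zeros((10, 10), dtype=int)
state[1, 2] = state[2, 3] = state[3, 1] = state[3, 2] = state[3, 3] = 1   # a glider
for _ in range(4):                       # four steps move the glider one cell diagonally
    state = life_step(state)
print(int(state.sum()))                  # still 5 live cells
```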
Suppose we are interested in simulating the physical process through which a pot of boiling water undergoes sudden turbulence shortly before reaching 100 degrees C (the transition point between water and steam). There seem to be two broad alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the physical parameters present during that short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to run a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, then the process is computationally irreducible, and simulating it will prove practically intractable.

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, "Novel Algorithm for Coarse-Graining of Cellular Automata" (link). They describe "coarse-graining" in their abstract in these terms:
The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.
Here Magiera and Dzwinel seem to be expressing the view that the coarse-graining approach to simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld becomes computationally prohibitive for more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above), though their improved algorithm extends its reach.

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level -- the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?